**Soft and Biological Matter**

# Federico Toschi · Marcello Sega, *Editors*

# Flowing Matter

# **Soft and Biological Matter**

# **Series Editors**

- David Andelman, School of Physics and Astronomy, Tel Aviv University, Tel Aviv, Israel
- Wenbing Hu, School of Chemistry and Chemical Engineering, Department of Polymer Science and Engineering, Nanjing University, Nanjing, China
- Shigeyuki Komura, Department of Chemistry, Graduate School of Science and Engineering, Tokyo Metropolitan University, Tokyo, Japan
- Roland Netz, Department of Physics, Free University of Berlin, Berlin, Germany
- Roberto Piazza, Department of Chemistry, Materials Science, and Chemical Engineering "G. Natta", Polytechnic University of Milan, Milan, Italy
- Peter Schall, Van der Waals-Zeeman Institute, University of Amsterdam, Amsterdam, The Netherlands
- Gerard Wong, Department of Bioengineering, California NanoSystems Institute, UCLA, Los Angeles, CA, USA

"Soft and Biological Matter" is a series of authoritative books covering established and emergent areas in the realm of soft matter science, including biological systems spanning all relevant length scales from the molecular to the mesoscale. It aims to serve a broad interdisciplinary community of students and researchers in physics, chemistry, biophysics and materials science.

Pure research monographs in the series, as well as those of more pedagogical nature, will emphasize topics in fundamental physics, synthesis and design, characterization and new prospective applications of soft and biological matter systems. The series will encompass experimental, theoretical and computational approaches. Topics in the scope of this series include but are not limited to: polymers, biopolymers, polyelectrolytes, liquids, glasses, water, solutions, emulsions, foams, gels, ionic liquids, liquid crystals, colloids, granular matter, complex fluids, microfluidics, nanofluidics, membranes and interfaces, active matter, cell mechanics and biophysics.

Both authored and edited volumes will be considered.

More information about this series at http://www.springer.com/series/10783


Funded by the Horizon 2020 Framework Programme of the European Union

*Editors*

Federico Toschi
Department of Applied Physics
Eindhoven University of Technology
Eindhoven, The Netherlands

Marcello Sega
Forschungszentrum Jülich
Helmholtz Institute Erlangen-Nürnberg for Renewable Energy
Nuremberg, Germany

This article/publication is based upon the work from COST Action MP1305, supported by COST (European Cooperation in Science and Technology).

COST (European Cooperation in Science and Technology; www.cost.eu) is a funding agency for research and innovation networks. Our Actions help connect research initiatives across Europe and enable scientists to grow their ideas by sharing them with their peers. This boosts their research, career and innovation.

ISSN 2213-1736; ISSN 2213-1744 (electronic)
Soft and Biological Matter
ISBN 978-3-030-23369-3; ISBN 978-3-030-23370-9 (eBook)
https://doi.org/10.1007/978-3-030-23370-9

© The Editor(s) (if applicable) and The Author(s) 2019. This book is an open access publication. **Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

# **Preface**

Flowing Matter is the term that probably best describes the macroscopic behaviour emerging from the coordinated dynamics of microscopic entities. Flowing Matter, therefore, goes well beyond the realm of classical fluid mechanics, which traditionally deals with the dynamics of molecules in liquids, to include the dynamics of fluids with a complex internal structure as well as the emergent dynamics of interacting active agents.

Flowing Matter research lies at the border between physics, mathematics, chemistry, engineering, biology, and earth sciences, to name a few. Flowing Matter also involves an extensive range of different experimental, numerical, and theoretical approaches. The three main research areas in Flowing Matter are complex fluids, active matter, and complex flows.


Many relevant scientific and technological problems today lie across two or even three of these major research areas. It is clear, therefore, that a multidisciplinary approach is needed in order to develop a unified picture in the field. The Flowing Matter MP1305 COST Action was established in 2014, aiming at bringing together the scientific communities working on these areas and at helping to advance towards a unified approach and understanding of Flowing Matter.

During the 4 years of its activity, Flowing Matter managed to foster scientific exchange between researchers active in its different areas, filling what was a gap in the communication network and facilitating the exchange of methods and best practices.

This book is the last activity organised by the MP1305 COST Action and represents just a small part of its heritage, beyond the many scientific meetings, discussions, and publications that were fostered by the COST Action.

This book is meant for young scientists as well as for any researcher aiming at broadening their view on Flowing Matter. It reflects, in a very concise way, the original spirit of the COST Action and covers its main topics through different methodologies: experiments, theory, numerical methods, and applications.

Marcello Sega, Nuremberg, Germany

Federico Toschi, Eindhoven, The Netherlands

February 2019



# **Chapter 1 Numerical Approaches to Complex Fluids**

**Marco E. Rosti, Francesco Picano, and Luca Brandt**

# **1.1 Introduction to Complex Fluids and Rheology**

We are surrounded by a variety of fluids in our everyday life. Besides water and air, it is common to deal with fluids with peculiar behaviours, such as gels, mayonnaise, ketchup and toothpaste. While water, oil and other so-called *simple* (Newtonian) fluids flow "regularly" when we apply a force, the response of complex fluids is different. In some cases, we need to apply a stress larger than a certain threshold for the material to start flowing, for example, to extract toothpaste from the tube; the same paste behaves as a solid on the toothbrush, where it is exposed only to the gravitational force. In other cases, the history of past deformations plays a role in the present behaviour. Rheology studies and classifies the response of different fluids and materials to an applied force and, to this end, how the macroscopic behaviour is linked to the microscopic structure of the fluid. Hence, while simple fluids made of identical molecules show a linear response to the applied forces, complex fluids with a microstructure, such as suspensions, may show a very complex response.

In this chapter, we introduce numerical approaches for complex fluids, focusing on how the additional stress due to the presence of a microstructure is modelled and how rigid and deformable inclusions can be simulated. We will assume the reader has a solver for the momentum and mass conservation equations, typically based on a finite-difference or finite-volume representation. An alternative, also very popular, approach is the Lattice–Boltzmann method; this will not be considered here, and the reader is referred to Refs. [1, 2].

M. E. Rosti · L. Brandt (✉)
Linné FLOW Centre and SeRC, KTH Mechanics, Stockholm, Sweden
e-mail: luca@mech.kth.se

F. Picano Department of Industrial Engineering, University of Padova, Padua, Italy

© The Editor(s) (if applicable) and The Author(s) 2019
F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9_1

### **Newtonian and Non-Newtonian Rheology**

The macroscopic rheological behaviour of a viscous fluid is well characterised in a Couette flow, i.e. the flow between two parallel walls of area *A* at distance *b*, with the upper wall moving at constant (low) velocity *U*0 and the lower at rest. To keep the upper wall moving at constant velocity we need to apply a force *F* which is proportional to the wall area, *F* ∝ *A*; therefore it is more general to consider the stress *τ* = *F/A* instead of the force *F* itself. In a *Newtonian* fluid the shear stress is proportional to the velocity of the upper wall and to the inverse of the wall distance *b*, i.e. *τ* ∝ *U*0*/b*. This linear response defines Newtonian fluids, such as air, water, oil and many others. Note that in a simple Couette flow the ratio *U*0*/b* equals the wall-normal derivative of the velocity profile, i.e. the shear (deformation) rate: *du/dy* = *γ*˙ = *U*0*/b*. Thus, for a Newtonian fluid we can express the law relating the applied force with the response, i.e. the shear stress *τ* with the shear rate *γ*˙, as

$$
\tau = \mu \dot{\gamma},\tag{1.1}
$$

where the proportionality coefficient *μ* is called the dynamic viscosity, whose SI unit is Pa s. Many Newtonian fluids exist, each with a different value of the viscosity, and therefore flowing at different velocities when subject to the same stress. The viscosity coefficient of a Newtonian fluid does not depend on the shear rate, but may vary with the temperature. Indeed, the viscosity usually increases with temperature in gases, while it decreases in liquids. This behaviour is related to the effect of the temperature on the molecular structure of the fluid, but this is outside the scope of the present chapter and the reader is referred to specialised textbooks.

Fluids that exhibit a non-linear relation between the shear stress *τ* and the shear rate *γ*˙ are called *non-Newtonian*, and fluids whose response does not depend explicitly on time but only on the present shear rate are denoted *generalised Newtonian* fluids. In particular, when the shear stress increases more than linearly with the shear rate, the fluid is called *dilatant* or *shear-thickening*, whereas in the opposite case, i.e. when the shear stress increases less than linearly with the shear rate, the fluid is called *pseudoplastic* or *shear-thinning*. Examples of typical profiles of the shear stress *τ* as a function of the shear rate *γ*˙ for Newtonian, shear-thickening and shear-thinning fluids are shown in the right panel of Fig. 1.1. The ratio of the applied stress to the resulting deformation rate is the so-called apparent (effective) viscosity *μe* = *τ/γ*˙: it increases with *γ*˙ for shear-thickening fluids, while it decreases for shear-thinning ones; in other words, the fluidity of shear-thickening fluids reduces as the shear rate increases, while the opposite is true for shear-thinning fluids. Examples of shear-thinning fluids are ketchup, mayonnaise and toothpaste, while corn-starch water mixtures and dense non-colloidal suspensions usually exhibit a shear-thickening behaviour. Note that, sometimes, the same fluid can have plastic or elastic responses depending on the flow configuration.

Complex fluids may behave as solids, with a finite deformation, when the applied stress is below a certain threshold *τ*0, while for stresses above it, they start flowing as liquids. These fluids are called *yield stress* or *Bingham* fluids: when the applied stress exceeds the so-called yield stress, *τ*0, these fluids can exhibit a linear relation between stress and deformation, similar to Newtonian fluids, or a pseudoplastic response. These macroscopic behaviours are related to changes of the microscopic structure of the fluid; indeed, these fluids consist of a Newtonian fluid with one or more suspended phases, such as fibres, polymers or trapped fluids (emulsions). From a qualitative point of view, the material hardly flows and deforms when the connections and interactions between the phases constituting the microstructure are intense. Changing the level of the stress *τ* applied to these complex fluids may strengthen, weaken or break these interactions, thus altering their microstructure and eventually reflecting in their non-linear rheological behaviour.

**Fig. 1.1** (left) Sketch of a plane Couette flow. (right) Sketch of the shear stress *τ* as a function of the shear deformation rate *γ*˙ for different kinds of fluids

In order to describe complex fluids, we need a relation, as in Eq. (1.1), between the applied stress *τ* and the deformation rate *γ*˙. A relation that summarises the behaviours previously described for complex fluids is the Herschel–Bulkley formula

$$
\tau = \tau_0 + K \,\dot{\gamma}^n,\tag{1.2}
$$

where *τ*0 is the yield stress, *n* the flow index and *K* the fluid consistency index. A Newtonian behaviour is recovered when *τ*0 = 0, *n* = 1 and *K* = *μ*, while values of the flow index above and below unity, *n >* 1 and *n <* 1, denote shear-thickening and shear-thinning fluids, respectively. Finally, yield-stress fluids are characterised by a finite non-zero value of the yield stress *τ*0. The consistency index *K* measures how strongly the fluid responds to the imposed deformation rate. However, *K* has the same dimension as a dynamic viscosity only when *n* = 1; in general its dimension depends on *n*, so that it is not possible to compare values of *K* for fluids with different flow indices *n*.
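Equation (1.2) is straightforward to evaluate numerically. The short sketch below (parameter values are illustrative) recovers the Newtonian limit and shows the shear-thinning decrease of the effective viscosity *μe* = *τ/γ*˙:

```python
import numpy as np

def herschel_bulkley_stress(gamma_dot, tau_0, K, n):
    """Shear stress from the Herschel-Bulkley law, Eq. (1.2): tau = tau_0 + K * gamma_dot**n."""
    return tau_0 + K * gamma_dot**n

def effective_viscosity(gamma_dot, tau_0, K, n):
    """Apparent effective viscosity mu_e = tau / gamma_dot."""
    return herschel_bulkley_stress(gamma_dot, tau_0, K, n) / gamma_dot

gamma_dot = np.array([0.1, 1.0, 10.0])

# Newtonian limit (tau_0 = 0, n = 1, K = mu): constant effective viscosity
mu_newton = effective_viscosity(gamma_dot, tau_0=0.0, K=2.0, n=1.0)

# shear-thinning fluid (n < 1): effective viscosity decreases with the shear rate
mu_thin = effective_viscosity(gamma_dot, tau_0=0.0, K=2.0, n=0.5)
```

For a yield-stress fluid (*τ*0 > 0) the same expression gives an effective viscosity that diverges as *γ*˙ → 0, consistent with the solid-like behaviour below the yield stress.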

The fluids discussed so far are inelastic, since the stress is just a function of the present value of the deformation rate, i.e. *τ* = *τ (γ*˙ *)*, and not of the previous history of the deformation rate (no memory effects). Another important class of non-Newtonian fluids, which cannot be described by the Herschel–Bulkley formula, is that of *viscoelastic* fluids. These materials have properties similar to both a viscous liquid and an elastic solid. Indeed, the deformation is no longer permanent, as in usual fluids, and depends on both viscous and elastic contributions. When a constant stress *τ* is applied, the deformation of a viscoelastic fluid increases with time, but when the applied stress is removed, the fluid tends to recover its original configuration (similarly to elastic solids). Polymer solutions usually exhibit viscoelastic behaviour; another culinary example is pizza dough: when softly pressed it deforms, but when the pressure is removed the original shape is recovered. However, if the dough is strongly deformed, we can rearrange it into a new stable configuration, similarly to what happens in fluids. Memory and elastic effects are difficult to model, and typically require information about the microstructure deformation.

In some applications, complex fluids can be successfully modelled just by considering that their response is related to the memory of the deformation rate history; in other words, they have a time-dependent viscosity if exposed to a constant value of the shear rate. Two main kinds of such fluids can be identified: *thixotropic* fluids, whose effective viscosity decreases with the accumulated strain, and *rheopectic* fluids, whose effective viscosity increases with the accumulated strain. A classic example of a thixotropic fluid is paint, whose apparent viscosity increases when the deformation rate reduces, allowing it to better adhere to a surface. Rheopectic fluids are less common; an example is the synovial fluid in our knees, whose properties facilitate the absorption of shocks. Thixotropic and rheopectic fluids are usually modelled by a time-dependent viscosity, function of a scalar parameter that represents the evolution of their microstructure.
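A minimal sketch of such a scalar structure-parameter model is given below; the build-up and break-down rates and the linear viscosity law are illustrative assumptions, not taken from a specific model in the literature:

```python
def evolve_structure(gamma_dot, k_build=1.0, k_break=0.5, dt=0.01, t_end=20.0):
    """Integrate a scalar structure parameter s in [0, 1]:
    ds/dt = k_build * (1 - s) - k_break * gamma_dot * s
    (shear breaks the microstructure down, rest rebuilds it)."""
    s = 1.0  # fully structured at rest
    for _ in range(int(t_end / dt)):
        s += dt * (k_build * (1.0 - s) - k_break * gamma_dot * s)
    return s

def viscosity(s, mu_inf=1.0, beta=4.0):
    """Effective viscosity grows with the amount of structure (illustrative law)."""
    return mu_inf * (1.0 + beta * s)

# under constant shear the structure, and hence the viscosity, decays in time:
# thixotropic behaviour; at rest the structure (and viscosity) is recovered
s_sheared = evolve_structure(gamma_dot=5.0)
s_rest = evolve_structure(gamma_dot=0.0)
```

A rheopectic fluid would be obtained by reversing the roles of shear and rest in the evolution equation for the structure parameter.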

# **1.2 Macroscopic Approaches**

# *1.2.1 Eulerian/Eulerian Methods*

### **Inelastic Shear-Thinning/Thickening Fluids**

Shear-thinning and shear-thickening are possibly the simplest non-Newtonian behaviours of fluids, in which the viscosity *μ* respectively decreases or increases under shear, i.e. *μ* = *μ(γ*˙ *)*; these behaviours are only rarely observed in pure materials, but often occur in suspensions. Despite its simplicity, this description is able to capture the main effects induced by a microstructure in many applications. Several models have been developed to describe these fluids; an example of a *shear-thinning* model is the Carreau law, usually used to describe generalised Newtonian fluids whose viscosity depends upon the shear rate. The model properly describes pseudoplastic fluid viscosity in many engineering applications [3], and assumes an isotropic viscosity proportional to some power of the shear rate [4]:

$$\frac{\mu}{\mu_0} = \frac{\mu_\infty}{\mu_0} + \left(1 - \frac{\mu_\infty}{\mu_0}\right) \left[1 + \left(\lambda \dot{\gamma}\right)^2\right]^\frac{n-1}{2} \,. \tag{1.3}$$

In the previous relation, *μ* is the viscosity, *μ*0 and *μ*∞ the zero and infinite shear rate viscosities, *λ* the relaxation time and *n <* 1 the power index; the shear rate *γ*˙ is the second invariant of the strain-rate tensor, $\dot{\gamma} = \sqrt{2 S_{ij} S_{ij}}$, where $S_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$. At low shear rate ($\dot{\gamma} \ll 1/\lambda$) a Carreau fluid behaves as Newtonian, while at high shear rate ($\dot{\gamma} \gg 1/\lambda$) as a power-law fluid. For *shear-thickening* fluids a simple power-law model is frequently used,

$$\frac{\mu}{\mu_0} = \mathcal{M}\dot{\gamma}^{n-1},\tag{1.4}$$

which reproduces a monotonic increase of the viscosity with the local shear rate for *n >* 1. The constant M is called the consistency index and indicates the slope of the viscosity profile. More details on the Carreau and power-law models can be found in Ref. [4].
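Both closures are easy to evaluate pointwise; the following sketch (with illustrative parameter values) checks the two limits of the Carreau law, Eq. (1.3): the Newtonian plateau at low shear rate and the power-law regime at high shear rate:

```python
import numpy as np

def carreau_viscosity(gamma_dot, mu_0, mu_inf, lam, n):
    """Carreau effective viscosity, Eq. (1.3)."""
    r = mu_inf / mu_0
    return mu_0 * (r + (1.0 - r) * (1.0 + (lam * gamma_dot)**2)**((n - 1.0) / 2.0))

mu_0, mu_inf, lam, n = 1.0, 0.001, 10.0, 0.5

# gamma_dot << 1/lam: Newtonian plateau, mu ~ mu_0
mu_low = carreau_viscosity(1e-4, mu_0, mu_inf, lam, n)

# gamma_dot >> 1/lam: power-law regime, the viscosity has dropped well below mu_0
mu_high = carreau_viscosity(1e3, mu_0, mu_inf, lam, n)
```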

From a numerical point of view, the implementation of a shear-dependent viscosity is often straightforward; however, large variations of viscosity may result in significant time-step constraints when explicit schemes are used, and disrupt the solution techniques usually adopted to solve the viscous terms in the momentum equation implicitly. Indeed, the diffusive term can no longer be reduced to a constant-coefficient Laplace operator, since the viscosity is now a function of space. Dodd and Ferrante [5] introduced an operator-splitting technique able to overcome this drawback, initially derived for the pressure Poisson equation; this splitting approach can easily be extended to the Helmholtz equation resulting from an implicit (or semi-implicit) integration of the diffusive terms as well. In particular, the viscosity is split into a constant part and a space-varying component, i.e. *μ(x)* = *μ*0 + *μ'(x)*, and the resulting diffusive term is split accordingly into a constant-coefficient operator that can be treated implicitly, and a variable-coefficient operator that is treated explicitly.
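The idea can be illustrated on a one-dimensional periodic diffusion problem; the sketch below is a toy version under stated assumptions (second-order explicit fluxes for the variable part, a Fourier-space solve for the constant-coefficient implicit part), not the actual scheme of Ref. [5]:

```python
import numpy as np

def step_split_diffusion(u, mu, dx, dt):
    """One semi-implicit step of du/dt = d/dx( mu(x) du/dx ) with periodic BCs.
    The viscosity is split as mu(x) = mu_0 + mu'(x): the constant part mu_0 is
    treated implicitly, the spatially varying part mu'(x) explicitly."""
    n = u.size
    mu_0 = mu.min()          # constant part (taking the minimum keeps mu' >= 0)
    mu_p = mu - mu_0         # variable part, handled explicitly

    # explicit, conservative flux divergence for d/dx( mu' du/dx ),
    # with mu' evaluated at the cell faces
    mu_face = 0.5 * (mu_p + np.roll(mu_p, -1))
    flux = mu_face * (np.roll(u, -1) - u) / dx
    rhs = u + dt * (flux - np.roll(flux, 1)) / dx

    # implicit constant-coefficient part: (I - dt*mu_0*L) u_new = rhs,
    # solved in Fourier space thanks to periodicity
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    lap_hat = -(2.0 / dx**2) * (1.0 - np.cos(k * dx))  # symbol of the discrete Laplacian
    u_new_hat = np.fft.fft(rhs) / (1.0 - dt * mu_0 * lap_hat)
    return np.real(np.fft.ifft(u_new_hat))

# usage: variable-viscosity smoothing of a sinusoidal profile
n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x)
mu = 0.1 + 0.05 * np.cos(x)**2
for _ in range(100):
    u = step_split_diffusion(u, mu, dx=L / n, dt=0.01)
```

Only the (mild) variable part *μ'* enters the explicit stability constraint, which is the point of the splitting.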

### **Viscoelastic Fluids**

Viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Unlike purely elastic substances, a viscoelastic substance has an elastic component and a viscous component, the latter making its response rate-dependent in time. Viscoelastic materials have often been modelled as linear combinations of springs and dashpots; famous examples are the Maxwell model, represented by a purely viscous damper

**Fig. 1.2** Sketch of the mechanical model of the (**a**) Kelvin–Voigt model, (**b**) Oldroyd-B viscoelastic model and of the (**c**) elastoviscoplastic fluid proposed by Saramito

and a purely elastic spring connected in series; the Kelvin–Voigt model (Fig. 1.2a), made of a Newtonian damper and a Hookean elastic spring connected in parallel; and the standard linear solid model, which combines the Maxwell model and a Hookean spring in parallel. In 1950, Oldroyd proposed a famous viscoelastic model [6], often called the Oldroyd-B model (Fig. 1.2b), where the fluid is assumed to consist of dumbbells, i.e. beads connected by elastic springs. In a frame-independent form, the model can be expressed in terms of the upper-convected derivative of the stress tensor

$$
\lambda \left( \frac{\partial \tau_{ij}}{\partial t} + u_k \frac{\partial \tau_{ij}}{\partial x_k} - \tau_{kj} \frac{\partial u_i}{\partial x_k} - \tau_{ik} \frac{\partial u_j}{\partial x_k} \right) + \tau_{ij} = 2\eta_m S_{ij}, \tag{1.5}
$$

where *τij* is the stress tensor, *λ* the relaxation time, *ηm* the material viscosity and *Sij* the rate of strain tensor. Although the model provides good approximations of viscoelastic fluids in shear flow, it has an unphysical singularity in extensional flow, where the dumbbells are infinitely stretched [7]. In order to overcome this problem, the finitely extensible non-linear elastic (FENE) model has been proposed; it consists of a sequence of beads with non-linear springs, with forces governed by the inverse Langevin function. Subsequently, the FENE-P model has been developed by extending the dumbbell version of the FENE model and assuming the Peterlin statistical closure for the restoring force. The model is well suited for numerical simulations, since it removes the need for statistical averaging at each grid point at any instant in time: the polymer suspension is treated as a continuum and its dynamics represented by an evolution equation for the phase-averaged conformation tensor *Cij*, a symmetric second-order tensor defined as *Cij* = ⟨*qiqj*⟩, where *qi* are the components of the end-to-end vector of a polymer molecule. The evolution of the polymer conformation is governed by the balance of stretching and restoring forces in an Eulerian framework, such that the transport equation for the conformation tensor can be expressed as

$$\frac{\partial C_{ij}}{\partial t} + u_k \frac{\partial C_{ij}}{\partial x_k} = C_{kj} \frac{\partial u_i}{\partial x_k} + C_{ik} \frac{\partial u_j}{\partial x_k} - \tau_{ij}, \tag{1.6}$$


where *τij* is the polymeric stress tensor, defined as

$$
\tau_{ij} = \frac{1}{\lambda} \left( \frac{C_{ij}}{1 - \frac{C_{kk}}{L^2}} - \delta_{ij} \right),
\tag{1.7}
$$

with *L* the maximum polymer extensibility, *δij* the Kronecker delta and *λ* the polymer relaxation time. A non-dimensional number, usually called the Weissenberg number *We*, can be defined based on the polymer relaxation time *λ*:

$$We = \frac{\lambda U^{ref}}{L^{ref}}.\tag{1.8}$$

The previous transport equation balances the advection of the conformation tensor on the left-hand side with the stretching (first two terms) and relaxation (last term) of the polymer on the right-hand side. Polymer stresses result from the action of polymer molecules to keep their configuration close to the highest entropic state, i.e. the coiled configuration (see Refs. [3, 8]). The polymer stress is then added to the momentum equation, i.e. the Navier–Stokes equations for an incompressible flow in the case of polymer solutions.
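Given the conformation tensor at a grid point, the FENE-P polymeric stress of Eq. (1.7) can be evaluated directly; the sketch below (illustrative values) shows the steep growth of the stress as the trace of *Cij* approaches *L*²:

```python
import numpy as np

def fene_p_stress(C, lam, L_max):
    """FENE-P polymeric stress, Eq. (1.7): tau = (1/lambda) * (C/(1 - tr(C)/L^2) - I)."""
    f = 1.0 / (1.0 - np.trace(C) / L_max**2)  # Peterlin function
    return (f * C - np.eye(3)) / lam

# nearly coiled polymers: conformation close to the identity, small stress
C_coil = np.eye(3)
tau_coil = fene_p_stress(C_coil, lam=1.0, L_max=10.0)

# strongly stretched polymers: tr(C) approaching L^2, the stress grows steeply
C_stretched = np.diag([90.0, 1.0, 1.0])
tau_str = fene_p_stress(C_stretched, lam=1.0, L_max=10.0)
```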

The numerical solution of Eq. (1.6) is cumbersome, and many researchers have shown that it is unstable, especially at high Weissenberg numbers, since any disturbance amplifies over time [9–11]. Indeed, the numerical solution of this equation can easily diverge and lead to numerical breakdown, since it is an advection equation without any diffusion term [12]. One of the earliest solutions to this problem was to introduce a global *artificial diffusivity* (AD) in the transport equation of the conformation tensor [11, 13, 14] by adding to the right-hand side of Eq. (1.6) the term $\kappa\, \partial^2 C_{ij} / (\partial x_k \partial x_k)$, where $\kappa$ is a diffusion coefficient. Subsequently, global AD was replaced by local AD, where the diffusion is applied only at locations where the tensor *Cij* experiences a loss of positive-definiteness. More recently, researchers have started to use high-order *weighted essentially non-oscillatory* (WENO) schemes [15] for the advection terms in the equation. WENO schemes are non-linear finite-volume or finite-difference methods which can numerically approximate solutions of hyperbolic conservation laws and other convection-dominated problems with high-order accuracy in smooth regions and an essentially non-oscillatory transition across solution discontinuities. Apart from that, the governing differential equations can be solved on a staggered grid using a second-order central finite-difference scheme. This methodology has been proved to work properly by Sugiyama et al. [16] and has also been successfully used in Refs. [17–19]. A comprehensive review of the properties of different numerical schemes for the advection terms is reported by Min et al. [9].

An alternative methodology to overcome such problems is the so-called *log-representation of the conformation tensor*, which ensures the positive-definiteness of the tensor *Cij* even at high Weissenberg number [20–23]; it consists in solving equivalent transport equations for $A = \log C$, instead of those for the conformation tensor $C$. Following the notation used in Ref. [23], we write $A = \log C = R (\log D) R^T$, where $D$ is a diagonal matrix containing the eigenvalues of $C$ and $R$ an orthogonal matrix containing the eigenvectors of $C$. First, we decompose the velocity gradient as $(\nabla u)^T = \Omega + B + N C^{-1}$; note that $\Omega$ and $N$ are antisymmetric and that $B$ is traceless, symmetric and commutes with $C$. Next, we introduce four new matrices, $\widetilde{M} = R^T (\nabla u)^T R$, $\widetilde{\Omega} = R^T \Omega R$, $\widetilde{B} = R^T B R$ and $\widetilde{N} = R^T N R$, and rewrite the decomposition of the velocity gradient as $\widetilde{M} = \widetilde{\Omega} + \widetilde{B} + \widetilde{N} D^{-1}$. In order to ensure a unique decomposition, $\widetilde{B}$ is diagonal, while $\widetilde{\Omega}$ and $\widetilde{N}$ are antisymmetric. $\widetilde{B}$, $\widetilde{N}$ and $\widetilde{\Omega}$ can then be found by satisfying the equations

$$
\widetilde{B} + \frac{1}{2} \left( \widetilde{N} D^{-1} - D^{-1} \widetilde{N} \right) = \frac{1}{2} \left( \widetilde{M} + \widetilde{M}^T \right) \tag{1.9}
$$

and

$$
\widetilde{\Omega} + \frac{1}{2} \left( \widetilde{N} D^{-1} + D^{-1} \widetilde{N} \right) = \frac{1}{2} \left( \widetilde{M} - \widetilde{M}^T \right). \tag{1.10}
$$

Finally, the original transport equation for *C* is rewritten into an equivalent one for *A*

$$\frac{\partial A}{\partial t} + (\boldsymbol{u} \cdot \nabla)A - \left(\boldsymbol{\Omega}A - A\boldsymbol{\Omega}\right) - 2B = \frac{1}{We}\left(e^{-A} - I\right) - \frac{\alpha}{We}\left(e^{-A} - I\right)^2,\tag{1.11}$$

where $e^{A} = R D R^{T}$ and $e^{-A} = R D^{-1} R^{T}$.
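The mappings between *C* and *A* reduce, at each grid point, to an eigendecomposition of a symmetric tensor; a minimal sketch (function names are illustrative) is:

```python
import numpy as np

def sym_log(C):
    """A = log C = R log(D) R^T via eigendecomposition of the SPD tensor C."""
    d, R = np.linalg.eigh(C)
    return R @ np.diag(np.log(d)) @ R.T

def sym_exp(A):
    """C = e^A = R e^D R^T; the result is SPD for any symmetric A."""
    d, R = np.linalg.eigh(A)
    return R @ np.diag(np.exp(d)) @ R.T

# any symmetric A maps back to a positive-definite conformation tensor,
# which is precisely what the log-conformation formulation exploits
A = np.array([[0.5, 2.0, 0.0],
              [2.0, -1.0, 0.3],
              [0.0, 0.3, 0.2]])
C = sym_exp(A)
eigvals = np.linalg.eigvalsh(C)
```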

### **Plastic Effects**

Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behaviour of solids. Rate dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The first viscoplastic rheological model based on yield stress (stress at which a material begins to deform plastically) was proposed by Schwedoff [24] as a plastic viscoelastic version of the Maxwell model:

$$\begin{cases} \dot{\varepsilon} = 0 & \text{if } \tau \le \tau_0 \\ \lambda \dfrac{d\tau}{dt} + (\tau - \tau_0) = \eta_m \dot{\varepsilon} & \text{if } \tau > \tau_0 \end{cases} \tag{1.12}$$

where *ε*˙ is the rate of deformation, *ηm* the solid viscosity and *τ*<sup>0</sup> the yield stress. The previous model states that when the stress *τ* is less than the yield stress *τ*0, the material is completely solid, and the rate of deformation is zero, while when the stress is greater than the yield value, it behaves as a fluid. Note that, at steady state, we obtain *τ* = *τ*<sup>0</sup> + *ηmε*˙. Bingham [25] proposed a similar model:

$$\max\left(0, \frac{|\tau| - \tau\_0}{|\tau|}\right)\tau = \eta\_m \dot{\varepsilon},\tag{1.13}$$

which can be rewritten as

$$\begin{cases} \dot{\varepsilon} = 0 & \text{if } |\tau| \le \tau_0 \\ \dfrac{|\tau| - \tau_0}{|\tau|}\, \tau = \eta_m \dot{\varepsilon} & \text{if } |\tau| > \tau_0 \end{cases} \tag{1.14}$$

The Bingham model is exactly equivalent to the steady case of the one proposed by Schwedoff for positive rates of deformation. In 1947, Oldroyd modified the Bingham model and proposed the following constitutive equation [26]:

$$\begin{cases} \tau = \mu \varepsilon & \text{if } |\tau| \le \tau_0 \\ \dfrac{|\tau| - \tau_0}{|\tau|}\, \tau = \eta_m \dot{\varepsilon} & \text{if } |\tau| > \tau_0 \end{cases} \tag{1.15}$$

which combines the yielding criterion with a linear Hookean elastic behaviour before yielding and a viscous behaviour after yielding. Differently from the previously described models, here the material is not completely rigid when the stress is below the yield value. The numerical simulation of a Bingham fluid is not a straightforward task, because of the mathematical non-smoothness of the model and the indeterminacy of the stress tensor below the yield stress threshold [27]. Two kinds of solution methods have been proposed in the literature: the *regularisation approach* [28–32] and the *augmented Lagrangian* algorithm [33–40]. The former consists in modifying the constitutive equation in order to avoid the numerical and mathematical complexities, while the latter consists in solving the whole problem as the minimisation of a functional with a step-descent Uzawa algorithm [41]. In other words, the former method solves modified equations which are computationally more permissive, while the latter solves the actual yield-stress model, but is computationally much more expensive. Within the first category of regularised approaches, in 1987 Papanastasiou [29] developed a modified constitutive relation for Bingham plastics whose main feature is that the tracking of the yield surfaces is completely eliminated. The model assumes

$$
\tau = \left[\mu + \frac{\tau\_0}{|\dot{\gamma}|} \left(1 - e^{-M|\dot{\gamma}|}\right)\right] \dot{\gamma},\tag{1.16}
$$

where $M$ is a constant that, when chosen sufficiently large, provides a rapid stress growth even at relatively low strain rates. This behaviour is consistent with materials in their practically unyielded state, i.e. plastic materials that exhibit little or no deformation up to a certain level of stress determined by the yield stress. Due to the fast-growing stress, this model has sometimes been used to represent fluids that exhibit extreme shear-thickening behaviour.
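As a rough illustration of Eq. (1.16), the regularised law can be evaluated as an effective viscosity that remains finite as the strain rate vanishes; the function name and parameter values below are our own, illustrative choices.

```python
import numpy as np

def papanastasiou_viscosity(gamma_dot, mu, tau0, M):
    """Effective viscosity of the Papanastasiou-regularised model,
    Eq. (1.16): tau = [mu + tau0/|g| * (1 - exp(-M |g|))] * g."""
    g = np.abs(gamma_dot)
    # In the limit |g| -> 0 the viscosity tends to mu + tau0*M (finite),
    # unlike the unregularised Bingham model, which diverges.
    return np.where(g > 0,
                    mu + tau0 / np.maximum(g, 1e-300) * (1.0 - np.exp(-M * g)),
                    mu + tau0 * M)
```

For large `M` the stress `mu*g + tau0*(1 - exp(-M*g))` approaches the Bingham law `tau0 + mu*g` as soon as the strain rate exceeds a few multiples of `1/M`, which is why no explicit tracking of the yield surface is needed.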

Motivated by experimental observations, where yield-stress fluids have an elastic response, Saramito [42, 43] combined the Bingham and Oldroyd models, and proposed a model for elastoviscoplastic fluids (Fig. 1.2c)

$$
\lambda \frac{d\tau}{dt} + \max\left(0, \frac{|\tau| - \tau\_0}{|\tau|}\right)\tau = \eta\_m \dot{\varepsilon},\tag{1.17}
$$

where the total stress is again $\sigma = \eta\dot{\varepsilon} + \tau$. While Schwedoff proposed a rigid behaviour when $|\tau| \le \tau\_0$ and Oldroyd a change of model when reaching the yield value, Saramito ensures a continuous change from a solid to a fluid behaviour of the material. The mechanical model is composed of a friction element inserted in the Oldroyd viscoelastic model: at stresses below the yield stress, the friction element remains rigid, and the whole system predicts only recoverable Kelvin–Voigt viscoelastic deformation due to a spring and a viscous element $\eta$ in parallel. Note that the elastic behaviour $\tau = \mu\varepsilon$ is expressed in differential form and that $\mu = \eta\_m/\lambda$ is the elasticity of the spring. As soon as the strain energy exceeds the level required by the von Mises criterion [44], the friction element breaks, allowing deformation of another viscous element ($\eta\_m$), and the material is described by the Oldroyd viscoelastic model.
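The transition from elastic loading to viscoplastic flow can be seen in a scalar (0D) integration of Eq. (1.17) under a constant applied shear rate; the parameter values are illustrative only. Below yield the stress grows elastically; once yielded, it relaxes towards the steady value $\tau\_0 + \eta\_m\dot{\varepsilon}$.

```python
# Scalar (0D) sketch of the Saramito model, Eq. (1.17), under a constant
# applied shear rate; all parameter values are illustrative.
lam, eta_m, tau0, eps_dot = 1.0, 1.0, 0.5, 1.0
dt, nsteps = 1e-3, 20000

tau = 0.0
for _ in range(nsteps):
    # lambda*dtau/dt + max(0, (|tau|-tau0)/|tau|)*tau = eta_m*eps_dot
    plastic = max(0.0, (abs(tau) - tau0) / abs(tau)) if tau != 0.0 else 0.0
    tau += dt * (eta_m * eps_dot - plastic * tau) / lam

# below yield: purely elastic growth (tau = mu*eps with mu = eta_m/lam);
# after yield: the stress saturates at tau0 + eta_m*eps_dot = 1.5
print(tau)
```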

After expanding the time derivative in the previous equation, the general Saramito model can be written as

$$
\lambda \left( \frac{\partial \tau\_{ij}}{\partial t} + u\_k \frac{\partial \tau\_{ij}}{\partial x\_k} - \tau\_{kj} \frac{\partial u\_i}{\partial x\_k} - \tau\_{ik} \frac{\partial u\_j}{\partial x\_k} \right) + \max \left( 0, \frac{|\tau\_{ij}^d| - \tau\_0}{|\tau\_{ij}^d|} \right) \tau\_{ij} = 2\eta\_m S\_{ij}, \tag{1.18}
$$

where $\tau\_{ij}^d = \tau\_{ij} - \frac{1}{N}\tau\_{kk}\delta\_{ij}$ is the deviatoric part of $\tau\_{ij}$, with $N = 2$ or $3$ the dimension of the problem at hand, and $\delta\_{ij}$ the Kronecker delta. Note that for yield stress $\tau\_0 = 0$, the Oldroyd-B model is recovered. A non-dimensional number, usually called the Bingham number $Bn$, can be defined based on the yield stress $\tau\_0$ as

$$Bn = \frac{\tau\_0 L^{ref}}{\mu U^{ref}}.\tag{1.19}$$

The yield stress value of certain materials, for example, liquid metals, is a function of the temperature [45, 46]. Indeed, while in crystalline solids yielding involves bond switching in an orderly manner, in metallic glasses it should be determined by bond breakage [47, 48]. By computing separately the mechanical and thermal energies that are required for bond breakage, a simple relation between the yield stress and the temperature can be obtained: $\tau\_0 = 50(\rho/M)(T\_g - T)$, where $T$ is the ambient temperature, $\rho$ the density, $M$ the molar mass and $T\_g$ the glass transition temperature. Guan et al. [49] used molecular dynamics simulations and found that the yield strength and the temperature are well correlated through a simple expression

#### 1 Numerical Approaches to Complex Fluids 11

$$\frac{T}{T\_0} + \left(\frac{\tau}{\tau\_0}\right)^2 = 1,\tag{1.20}$$

where *T*<sup>0</sup> and *τ*<sup>0</sup> are viscosity-dependent, normalised constants.

The numerical solution of Eq. (1.18), similarly to Eq. (1.6), may be cumbersome. The use of high-order WENO schemes for the advection terms in the equation is suggested, to achieve high-order accuracy in smooth regions and an essentially non-oscillatory transition at solution discontinuities [50, 51]. The previously discussed log-representation of the equation can be used as well.

### **Fluid–Structure Interaction**

A fully Eulerian formulation of a fluid structure problem can be obtained with a technique similar to the one discussed in the previous sections. Indeed, we can consider fluid and solid motion governed by the conservation of momentum and the incompressibility constraint:

$$\frac{\partial u\_i^f}{\partial t} + \frac{\partial u\_i^f u\_j^f}{\partial x\_j} = \frac{1}{\rho} \frac{\partial \sigma\_{ij}^f}{\partial x\_j},\tag{1.21a}$$

$$\frac{\partial u\_i^f}{\partial x\_i} = 0,\tag{1.21b}$$

$$\frac{\partial u\_i^s}{\partial t} + \frac{\partial u\_i^s u\_j^s}{\partial x\_j} = \frac{1}{\rho} \frac{\partial \sigma\_{ij}^s}{\partial x\_j},\tag{1.21c}$$

$$\frac{\partial u\_i^s}{\partial x\_i} = 0,\tag{1.21d}$$

where the suffixes *<sup>f</sup>* and *<sup>s</sup>* are used to distinguish the fluid and solid phase. In the previous set of equations, *σij* is the Cauchy stress tensor. The kinematic and dynamic interactions between the fluid and solid phases are determined by enforcing the continuity of the velocity and traction force at the interface between the two phases

$$
u\_i^f = u\_i^s,\tag{1.22a}
$$

$$
\sigma\_{ij}^f n\_j = \sigma\_{ij}^s n\_j,\tag{1.22b}
$$

where $n\_i$ denotes the normal vector. The problem at hand can be solved numerically by using the so-called one-continuum formulation [52], where only one set of equations is solved over the whole domain. This is achieved by introducing a monolithic velocity vector field $u\_i$, valid everywhere, obtained by a volume-averaging procedure [53, 54], i.e.

$$
u\_i = \left(1 - \phi^s\right) u\_i^f + \phi^s u\_i^s,\tag{1.23}
$$

where *φ<sup>s</sup>* is an indicator function expressing the local solid volume fraction. Thus, we can write the stress in a mixture form as

$$
\sigma\_{ij} = \left(1 - \phi^s\right)\sigma\_{ij}^f + \phi^s \sigma\_{ij}^s. \tag{1.24}
$$

A fully Eulerian formulation is obtained after properly defining the fluid and solid Cauchy stress, with examples given in [16–18, 55].
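The volume-averaging step of Eqs. (1.23) and (1.24) can be sketched with a hypothetical `blend` helper that mixes any fluid and solid field using the local solid volume fraction; the values used are illustrative.

```python
import numpy as np

# Minimal sketch of the one-continuum formulation, Eqs. (1.23)-(1.24):
# a single monolithic field is obtained by volume averaging the fluid and
# solid contributions with the local solid volume fraction phi_s.
def blend(field_f, field_s, phi_s):
    """Volume-average two phase fields into one monolithic field."""
    return (1.0 - phi_s) * field_f + phi_s * field_s

u_f = np.array([1.0, 0.0])      # fluid velocity at a grid point
u_s = np.array([0.0, 1.0])      # solid velocity at the same point
print(blend(u_f, u_s, 0.25))    # -> [0.75 0.25]
```

The same helper applies unchanged to the stress tensors of Eq. (1.24), since the blending is linear in the fields.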

# **1.3 Microscopic Approaches**

In this section we will discuss approaches used to perform interface-resolved simulations of the inclusions that define the microstructure and are thus at the origin of the non-Newtonian behaviours described above. We will consider rigid and deformable particles, as well as two-fluid systems. Indeed, recent developments in computational power and efficient numerical algorithms have allowed the scientific community to numerically resolve the microstructure of suspensions in fluids.

# *1.3.1 Eulerian/Lagrangian Methods*

Eulerian/Lagrangian methods are often used to simulate suspensions in fluids, and are also called immersed boundary methods (IBM). The main feature of this method is that the numerical grid does not need to conform to the geometry of the object, which is instead replaced by a body force distribution *f* that mimics the effect of the body on the fluid and restores the desired velocity boundary values on the immersed surfaces. To do that, two separate grids coexist: the Eulerian fixed grid where the flow is solved, and the Lagrangian grid representing the moving immersed boundary (see Fig. 1.3a); a singular force distribution at the Lagrangian positions is first determined and then applied to the flow equations in the Eulerian frame via a regularised Dirac delta function.

The primary advantage of the IB method is associated with the simplification of the grid generation task: indeed, grid complexity and quality are not significantly affected by the complexity of the geometry. The advantage of the IB method becomes eminently clear for flows with moving boundaries, where the process of generating a new grid at each time step is avoided, because the grid remains stationary and non-deforming. A drawback of this approach is that the grid lines are not aligned with the body surface, so in order to obtain the required resolution, a higher number of grid points may be required. Many IBMs have been created so far, which differ in the way the immersed boundary force is computed [56–60]. The different methods are often grouped into two categories, continuous and direct forcing: in the first approach the forcing is incorporated into the continuous equations before discretisation, whereas in the second approach the forcing is introduced after

**Fig. 1.3** (**a**) Sketch of an immersed surface (grey) and of the Eulerian and Lagrangian grids used in the immersed boundary method. (**b**) Sketch of the volume of fluid method. (**c**) Sketch of the level-set method

the equations are discretised. The continuous forcing approach is very attractive for flows with immersed deforming boundaries, whereas the direct one is more commonly used to simulate rigid boundaries.

The original IB method was developed by Peskin [61] for the coupled simulation of blood flow and muscle contraction in a beating heart and is generally suitable for flows with immersed elastic boundaries. The IB is represented by a set of elastic fibres and the location of these fibres is tracked in a Lagrangian fashion by a collection of massless points moving with the local fluid velocity, i.e. the coordinate *X<sup>k</sup>* of the *k*-th Lagrangian point is governed by the equation

$$\frac{\partial \mathbf{X}\_k}{\partial t} = \mathbf{u}\left(\mathbf{X}\_k, t\right), \tag{1.25}$$

where *u* is the local fluid velocity. The stress (denoted by *F*) is related to deformation of these elastic fibres by a constitutive law, such as the Hooke's law, and the effect of the IB on the surrounding fluid is captured by transmitting the fibre stress to the fluid through a localised forcing term in the momentum equations

$$f\left(\mathbf{x},t\right) = \sum\_{k} F\_{k}\left(t\right)\delta\left(\left|\mathbf{x} - \mathbf{X}\_{k}\right|\right),\tag{1.26}$$

where *δ* is the Dirac delta function. Because the location of the fibres does not generally coincide with the nodal points of the Cartesian grid, the forcing is distributed over a band of cells around each Lagrangian point and added on the momentum equations of the surrounding nodes. Thus, the sharp delta function is replaced by a smoother distribution function, denoted here by *d*, so that the forcing at any grid point *xi,j* is given by

$$f\left(\mathbf{x}\_{i,j},t\right) = \sum\_{k} F\_{k}\left(t\right) d\left(\left|\mathbf{x}\_{i,j} - \mathbf{X}\_{k}\right|\right). \tag{1.27}$$

The fibre velocity in Eq. (1.25) is also obtained through the use of the same smooth function. The choice of the distribution function *d* is a key ingredient in this method, and several different distribution functions have been derived and employed in the past [61–64].
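For concreteness, one widely used choice of distribution function is the three-point kernel of Roma et al. (cited later in this chapter as [70]); a sketch of it follows. A key property is that the weights over the three nearest grid points sum to one, so interpolation and spreading conserve the total force.

```python
import numpy as np

def roma_delta(r):
    """Three-point regularised Dirac delta of Roma et al.; r is the
    distance to the Lagrangian point in units of the grid spacing."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    inner = r <= 0.5
    outer = (r > 0.5) & (r <= 1.5)
    out[inner] = (1.0 + np.sqrt(1.0 - 3.0 * r[inner]**2)) / 3.0
    out[outer] = (5.0 - 3.0 * r[outer]
                  - np.sqrt(1.0 - 3.0 * (1.0 - r[outer])**2)) / 6.0
    return out

# weights for a Lagrangian point sitting exactly on a grid node
w = roma_delta(np.array([-1.0, 0.0, 1.0]))
print(w, w.sum())  # -> weights 1/6, 2/3, 1/6, summing to 1
```

The partition-of-unity property holds for any offset of the Lagrangian point, not only the symmetric case shown above.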

In the same spirit, Goldstein et al. [65] developed another model, called feedback forcing, to simulate the flow around rigid and moving bodies, where the effect of the body on the surrounding flow is modelled through a forcing term of the form

$$F\left(\mathbf{x},t\right) = \alpha \int\_{0}^{t} \left[\mathbf{u}\left(\mathbf{x},\tau\right) - \mathbf{V}\left(\mathbf{x},\tau\right)\right] d\tau + \beta \left[\mathbf{u}\left(\mathbf{x},t\right) - \mathbf{V}\left(\mathbf{x},t\right)\right],\qquad(1.28)$$

where the coefficients $\alpha$ and $\beta$ are selected to best enforce the boundary condition at the immersed solid boundary, whose velocity is $\mathbf{V}$. The above relation is a feedback to the velocity difference $\mathbf{u} - \mathbf{V}$ and behaves in such a way as to enforce $\mathbf{u} = \mathbf{V}$ on the immersed boundary. Indeed, the first term on the right-hand side of the equation tends to annihilate the difference between $\mathbf{u}$ and $\mathbf{V}$, whereas the second term can be interpreted as the resistance opposed by the surface element to assuming a velocity $\mathbf{u}$ different from $\mathbf{V}$. In an unsteady flow the magnitude of $\alpha$ must be large enough so that the restoring force can react with a frequency which is larger than any frequency in the flow; however, large values of $\alpha$ and $\beta$ render the forcing equation stiff and its time integration requires very small time steps. The method has been used to simulate flexible filaments as well [66, 67]. Even if the original intent behind Eq. (1.28) is to provide feedback control of the velocity near the surface, from a physical point of view it can also be interpreted as a damped oscillator [68] with frequency $\sqrt{\alpha}/(2\pi)$ and damping coefficient $-\beta/(2\sqrt{\alpha})$.

### **Immersed Boundary Methods for Suspensions of Rigid Particles**

Uhlmann [56] proposed a computationally efficient numerical method based on the IBM to simulate suspensions of rigid particles. First, the immersed surface $\Gamma$ delimiting the body is discretised using $N$ markers, called Lagrangian points $\mathbf{X}$; note that, in general, they do not correspond to the grid nodes $\mathbf{x}$. The solution of the incompressible Navier–Stokes equations is based on the fractional-step method [69]. Indeed, a simple prediction step is first performed, without taking into account the immersed object. The obtained velocity field $\mathbf{u}^\*$ is then interpolated (with an interpolation operator $I$) onto the embedded geometry $\Gamma$,

$$U^\* = I\left(\mathbf{u}^\*\right). \tag{1.29}$$

The values of $U^\*$ are used to determine a distribution of singular forces along the boundary $\Gamma$ that restore the prescribed boundary values $U^{\Gamma}$ as

$$
\boldsymbol{F}^\* = \frac{\boldsymbol{U}^\Gamma - \boldsymbol{U}^\*}{\Delta t}. \tag{1.30}
$$

The force field defined over $\Gamma$ is then transformed into a body force distribution applied to the Eulerian grid using a convolution operator $\mathcal{C}$

$$f^\* = \mathcal{C}\left(F^\*\right). \tag{1.31}$$

The momentum conservation equation is then solved again with the computed volume force field added as a source term, and the time advancement step is completed with the usual solution of the pressure Poisson equation and the projection step where velocity and pressure are corrected to ensure mass conservation. Note that this procedure is common to most modern IB methods, and the step that defines each method is the way in which the operators $I$ and $\mathcal{C}$ are built: in particular, here the interpolation and spreading operations are based on the regularised Dirac delta function by Roma et al. [70], which extends over three grid cells in all coordinate directions.

The desired velocity $U^{\Gamma}$ at a location $\mathbf{X}$ on the interface between the fluid and the immersed boundary is given by the rigid-body motion of the solid object:

$$\mathbf{U}^{\Gamma} = \mathbf{u}^{c} + \boldsymbol{\omega}^{c} \times \mathbf{r},\tag{1.32}$$

where $\mathbf{r} = \mathbf{X} - \mathbf{x}^c$ is the position vector relative to the particle centroid, $\mathbf{u}^c$ is the translational velocity of the particle centroid and $\boldsymbol{\omega}^c$ is the angular velocity of the particle. The translational and angular velocities of a particle are described by the Newton–Euler equations, which for a sphere reduce to

$$
\rho\_p \mathcal{V}\_p \frac{d\mathbf{u}^c}{dt} = \oint\_{\partial\mathcal{V}\_p} \boldsymbol{\tau} \cdot \mathbf{n}\, dA + \left(\rho\_p - \rho\_f\right) \mathcal{V}\_p \mathbf{g} - \mathcal{V}\_p \nabla p + \mathbf{F}\_c,\tag{1.33}
$$

and

$$I\_p \frac{d\boldsymbol{\omega}^c}{dt} = \oint\_{\partial\mathcal{V}\_p} \mathbf{r} \times (\boldsymbol{\tau} \cdot \mathbf{n}) \, dA + \mathbf{T}\_c. \tag{1.34}$$

In the previous two equations, Eqs. (1.33) and (1.34), $\rho\_p$ is the density of the particle, $\mathcal{V}\_p$ its volume ($4/3\,\pi R^3$ for a sphere with radius $R$), $\boldsymbol{\tau}$ the fluid stress tensor, $\mathbf{n}$ the outward-pointing unit normal at the surface $\partial\mathcal{V}\_p$ of the particle, $\mathbf{g}$ the gravitational acceleration and $I\_p$ the moment of inertia of the particle ($2/5\,\rho\_p\mathcal{V}\_p R^2$ for a solid sphere). $\mathbf{F}\_c$ and $\mathbf{T}\_c$ represent the force and torque acting on the particle as a result of collisions and contact with other particles or solid walls.
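A minimal explicit update of Eqs. (1.33) and (1.34) for a sphere might look as follows; the hydrodynamic surface integrals are assumed to be given, the imposed pressure-gradient term is dropped, and the function name and all parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical one-step explicit update of the Newton-Euler equations
# (1.33)-(1.34) for a rigid sphere.
rho_p, rho_f, R = 2.0, 1.0, 0.1
V_p = 4.0 / 3.0 * np.pi * R**3          # sphere volume
I_p = 2.0 / 5.0 * rho_p * V_p * R**2    # moment of inertia of a solid sphere
g = np.array([0.0, 0.0, -9.81])

def newton_euler_step(u_c, omega_c, F_hydro, T_hydro, F_c, T_c, dt):
    # translational acceleration: hydrodynamic + buoyancy-corrected gravity + collisions
    du = (F_hydro + (rho_p - rho_f) * V_p * g + F_c) / (rho_p * V_p)
    domega = (T_hydro + T_c) / I_p
    return u_c + dt * du, omega_c + dt * domega

# free fall, no hydrodynamic or collision forces yet:
u1, w1 = newton_euler_step(np.zeros(3), np.zeros(3), np.zeros(3),
                           np.zeros(3), np.zeros(3), np.zeros(3), 1e-3)
print(u1)  # the sphere accelerates at (1 - rho_f/rho_p) g
```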

Breugem [59] proposed two major improvements to the method discussed above. The first is the so-called *multidirect forcing scheme*. The use of a regularised Dirac delta function for the interpolation and spreading operations results in a diffuse distribution of the IBM force around the interface and, because of that, the influence regions of neighbouring Lagrangian points overlap. Eulerian grid points in the overlap region are used to enforce the boundary value multiple times; thus, the resulting forcing is perturbed and the final distribution of the IBM force may not properly enforce the desired boundary condition. The remedy for this problem is to iteratively determine the IBM forces on the relevant Eulerian grid points such that they collectively enforce the desired boundary condition at the different Lagrangian points [71, 72]. The second improvement suggested by Breugem is the *inward retraction of the Lagrangian grid*. The delta function of Roma et al. [70] has a width of three grid cells, and because of that the (outer) radius of the particle actually increases from $R$ to $R + 3\Delta x/2$; this effect results in an increase of the particle drag force, which is partially balanced by an overall permeability of the particle due to a non-perfect boundary condition. As shown by Breugem, the first inaccuracy (the increase in the effective radius) is stronger than the other one, and the suggested solution is to slightly retract the Lagrangian points from the surface towards the interior of the particle [73, 74].
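The multidirect forcing idea can be sketched in one dimension: repeating the interpolate/force/spread cycle drives the interpolated velocity at a Lagrangian point to the desired value even though kernel supports overlap. A two-point linear kernel stands in here for the regularised delta function, and all values are illustrative.

```python
import numpy as np

# 1D sketch of the multidirect forcing iteration: each pass interpolates the
# velocity to the Lagrangian point (Eq. 1.29), computes the singular force
# (Eq. 1.30) and spreads it back to the grid (Eq. 1.31).
dx, dt = 1.0 / 16, 1e-2
u = np.ones(18)                  # predicted velocity on a 1D grid
X, U_gamma = 0.53, 0.0           # Lagrangian point and desired velocity

def interp(u, X):
    """Linear interpolation from cell centres x_j = (j + 1/2) dx to X."""
    j = int(X / dx - 0.5)
    w = X / dx - 0.5 - j
    return (1 - w) * u[j] + w * u[j + 1], j, w

for _ in range(20):
    U_star, j, w = interp(u, X)
    F = (U_gamma - U_star) / dt  # singular force restoring the boundary value
    u[j] += dt * (1 - w) * F     # spread with the same weights and apply
    u[j + 1] += dt * w * F

residual = abs(interp(u, X)[0] - U_gamma)
print(residual)  # -> essentially zero after a few iterations
```

A single pass leaves a residual proportional to $1 - [(1-w)^2 + w^2]$, which is exactly the overlap error the iteration removes.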

In Eqs. (1.33) and (1.34), $\mathbf{F}\_c$ and $\mathbf{T}\_c$ are the force and torque acting on the particle as a result of collisions and contact with other particles or solid walls. A recent model for particle–particle and particle–wall interactions in interface-resolved simulations of particle-laden flows is described by Costa et al. [75]. The model consists of three different interactions: long- and short-range hydrodynamic interactions, and solid–solid contact. The long-range interactions are directly obtained by the immersed boundary method, while the short-range ones are based on a lubrication model employed when the gap between particles is below the grid size, built on asymptotic expansions of the analytical solution for canonical lubrication interactions between spheres in the Stokes regime. Roughness effects can be accounted for as well. This correction is applied until the particles reach contact, when a linear soft-sphere collision model is used. Note that the approach described above can be extended to particles of different shapes [76], and that alternative collision models can be found in the literature, for example, Refs. [77, 78].

### **Front-Tracking Methods for Suspensions of Deformable Droplets**

The so-called front-tracking method is an evolution of the immersed boundary method used to simulate viscous, incompressible, immiscible two-fluid systems, first developed by Unverdi and Tryggvason [79, 80]. In such multiphase problems, the density and viscosity fields of each fluid remain constant, but they are discontinuous across the interface. In order to avoid numerical diffusion or oscillation problems close to the jump, these fluid properties are not advected directly; instead, a Lagrangian grid is created to describe the boundary between the different fluids, which is then moved with the fluid velocity. Therefore, at every time step it is necessary to reset the fluid properties and, to do so, an indicator function $\mathcal{F}(\mathbf{x})$ is also introduced, equal to 1 inside one fluid and 0 in the other one. This function is constructed from the known position of the interface and is used to evaluate the proper values of density and viscosity at each grid point:

$$\rho\left(\mathbf{x}\right) = \rho\_1 + \left(\rho\_2 - \rho\_1\right)\mathcal{F}\left(\mathbf{x}\right) \text{ and } \mu\left(\mathbf{x}\right) = \mu\_1 + \left(\mu\_2 - \mu\_1\right)\mathcal{F}\left(\mathbf{x}\right), \qquad (1.35)$$

where the suffixes 1 and 2 indicate the two fluids. The jump in the indicator function carried by the interface is spread to the grid points nearest to the interface, in order to ensure that the fluid properties change smoothly across the interface. This generates a grid-gradient field which is zero everywhere except near the interface and has a finite thickness of the order of the mesh size. The spreading of the jump onto the grid is done in such a way that the volume integral of the gradient is conserved, i.e. if G *(x)* is the gradient of the indicator function evaluated at a stationary grid point *x*, and D is a distribution function, such as the one introduced by Peskin [81], then

$$\mathcal{G}\left(\mathbf{x}\right) = \sum\_{k} \mathcal{D}\left(\mathbf{x} - \mathbf{X}\_{k}\right) N\_{k} \Delta S\_{k},\tag{1.36}$$

where $\mathbf{N}\_k$ is the unit normal vector to the interface element of area $\Delta S\_k$ whose centroid is at $\mathbf{X}\_k$. The indicator function is found everywhere by solving the following Poisson equation:

$$
\nabla^2 \mathcal{F} = \nabla \cdot \mathcal{G}, \tag{1.37}
$$

where the right-hand side is computed by simple numerical differentiation.

Since the fluid velocities are computed on the fixed grid and the front moves with the fluid velocities, the velocity of the interface points must be found by interpolation; thus, similarly to Eq. (1.36), to interpolate the velocity on the interface Lagrangian points we use

$$U\_k = \sum\_{i} \mathcal{D}\left(\mathbf{x}\_i - \mathbf{X}\_k\right) \mathbf{u}\_i,\tag{1.38}$$

where the sum is now over the points on the stationary grid in the vicinity of the considered *k*-th Lagrangian point. Finally, the new position of the interface is found by solving a simple advection equation

$$\frac{d\mathbf{X}\_k}{dt} = U\_k.\tag{1.39}$$

As the front moves, it deforms and stretches, and the resolution along some parts of the interface can become inadequate or overly crowded. To maintain accuracy, either additional elements must be added when the separation among points becomes too large or points must be redistributed to maintain adequate resolution.
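A minimal sketch of the element-addition step follows: whenever two adjacent markers drift further apart than a prescribed separation, a new marker is inserted at their midpoint. This is a deliberately simplified stand-in; production codes also delete overcrowded points and fit the curve rather than using straight midpoints.

```python
import numpy as np

def refine_front(X, max_sep):
    """Insert a midpoint marker wherever two consecutive points of the
    ordered front X (shape (N, 2)) are separated by more than max_sep."""
    out = [X[0]]
    for a, b in zip(X[:-1], X[1:]):
        if np.linalg.norm(b - a) > max_sep:
            out.append(0.5 * (a + b))   # add a marker on the stretched segment
        out.append(b)
    return np.array(out)

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.4, 0.0]])
print(refine_front(X, 0.2))
# the stretched second segment gains one extra marker at (0.25, 0)
```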

In those calculations where the surface tension $\sigma$ is needed, the magnitude of the surface tension force is obtained from the local curvature $\mathcal{K}$ of the interface: $\mathbf{F}\_k = \sigma \mathcal{K}\_k \mathbf{N}\_k \Delta S\_k$. This force is then distributed onto the grid as

$$f\left(\mathbf{x}\right) = \sum\_{k} \mathcal{D}\left(\mathbf{x} - X\_k\right) F\_k. \tag{1.40}$$

Note that alternative ways have been proposed to calculate the surface tension without having to find the curvature, using only the local tangent defined by the Lagrangian points on the interface [80].

# *1.3.2 Eulerian/Eulerian Methods*

When dealing with moving and deformable boundaries, an alternative approach to the IB front-tracking methods previously discussed is given by the so-called front-capturing methods, which are fully Eulerian and handle topology changes automatically. A strong advantage of such methods is that they are easier to parallelise and typically achieve higher efficiency than their Lagrangian counterparts. However, interactions between approaching particles and droplets are difficult to control and may depend on the resolution adopted. Eulerian interface representations include essentially the volume of fluid (VOF) [82] and level-set (LS) [83–85] methods, their variants and combinations. The VOF method defines different fluids with a discontinuous colour function, and its main advantage is intrinsic mass conservation; however, it suffers from an inaccurate computation of the interface properties, such as normals and curvatures [86, 87]. Contrary to the VOF, the LS method prescribes the interface through a (Lipschitz-)continuous function which usually takes the form of the signed distance to the interface. Thus, normals and curvatures can be readily and accurately computed, while mass loss/gain may occur since the LS function carries no volume information. Furthermore, this approach requires a procedure to reshape the LS into a distance function, i.e. the reinitialisation step. More recently, researchers have started to develop coupled VOF-LS methods [88] in order to overcome the disadvantages of both techniques.

### **Volume of Fluids**

We introduce an indicator (or colour) function *H* to identify a given fluid so that *H* = 1 in the region occupied by fluid 1 and *H* = 0 in fluid 2. Considering that the fluid is transported with the flow velocity, we update *H* in the Eulerian framework by the following advection equation:

$$\frac{\partial H}{\partial t} + \mathbf{u} \cdot \nabla H = 0,\tag{1.41}$$

where *u* is the velocity vector. The cell-averaged value of the indicator function is defined as the volume fraction or volume of fluid (VOF) function within a cell

$$
\phi = \frac{1}{\delta V} \int\_{\delta V} H dV.\tag{1.42}
$$

Thus, the VOF function assumes values 0 ≤ *φ* ≤ 1 (see Fig. 1.3b). Combining the two previous equations, we obtain the advection equation of the VOF function in the divergence form:

$$\frac{\partial \phi}{\partial t} + \nabla \cdot (\mathbf{u} H) = \phi \nabla \cdot \mathbf{u}. \tag{1.43}$$

In a conventional VOF method, the interface separating different fluids is piecewise reconstructed for each cell by straight line segments, which are then used to calculate the numerical fluxes necessary to update the VOF function. This geometric reconstruction effectively eliminates the numerical diffusion that smears out the compactness of the transition layer of the interface. Different methodologies have been proposed to accurately recover the exact surface geometry from the discretised VOF function: the simple line interface calculation (SLIC) method [89] and the piecewise linear interface calculation (PLIC) [90, 91], the latter being further modified by several authors [92–96]. Another technique is the tangent of hyperbola for interface capturing (THINC) method [97]: this avoids the explicit geometric reconstruction by using a continuous sigmoid function rather than the Heaviside function, thus allowing a completely algebraic description of the interface and enabling the computation of the numerical flux. An improvement was proposed by combining the original THINC method with the first-order upwind scheme in the so-called THINC/WLIC (THINC/weighted linear interface capturing) method [98]. Recently, the method has been further developed in the multi-dimensional THINC (MTHINC) method where a fully multi-dimensional hyperbolic tangent function is used to reconstruct the interface [99, 100]. The numerical fluxes can be directly evaluated by integrating the hyperbolic tangent function, which also prevents the numerical diffusion that smears out the interface transition layer. Another advantage of the method is that the normal vector, curvature and approximate delta function can be directly obtained from the derivatives of the function; thus the standard smoothing or convolution techniques used in conventional VOF methods are not required.
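The THINC idea can be illustrated in one dimension: inside a cell the indicator is represented by a hyperbolic tangent, and its jump location is found from the prescribed volume fraction. The closed-form inversion exists, but a bisection (used below, with an illustrative sharpness β) makes the idea plain.

```python
import numpy as np

# 1D sketch of THINC: in a cell [0, 1] the indicator is represented by
# 0.5*(1 + tanh(beta*(x - d))); d is chosen so that the cell average
# matches the given volume fraction C.
def thinc_average(d, beta):
    """Exact integral of 0.5*(1 + tanh(beta*(x - d))) over [0, 1]."""
    return 0.5 + (np.log(np.cosh(beta * (1 - d))) -
                  np.log(np.cosh(beta * d))) / (2 * beta)

def thinc_locate(C, beta=3.5):
    """Find the jump location d by bisection (average is monotone in d)."""
    lo, hi = -1.0, 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if thinc_average(mid, beta) > C:
            lo = mid          # too much fluid: move the jump to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = thinc_locate(0.5)
print(d)  # -> by symmetry the interface sits at the cell centre, d = 0.5
```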

The unit normal vector is defined as $\mathbf{n} = \mathbf{m}/|\mathbf{m}|$, $\mathbf{m}$ being the gradient of the VOF function, i.e. $\mathbf{m} = \nabla\phi$, which can be computed using the usual Young's approach [90, 91]. For example, in the 2D case, the values of the derivative at the four cell corners indexed as $i \pm \frac{1}{2}, j \pm \frac{1}{2}$ are first calculated from the VOF function in their surroundings, for example,

$$\begin{aligned} m\_{i+\frac{1}{2},j+\frac{1}{2}}^{x} &= \frac{\phi\_{i+1,j} + \phi\_{i+1,j+1} - \phi\_{i,j} - \phi\_{i,j+1}}{\Delta x\_i + \Delta x\_{i+1}}, \\ m\_{i+\frac{1}{2},j+\frac{1}{2}}^{y} &= \frac{\phi\_{i,j+1} + \phi\_{i+1,j+1} - \phi\_{i,j} - \phi\_{i+1,j}}{\Delta y\_j + \Delta y\_{j+1}}, \end{aligned} \tag{1.44}$$

and then averaged to find the cell-centre value

$$\begin{aligned} m\_{i,j}^{x} &= \frac{1}{4}\left( m\_{i-\frac{1}{2},j-\frac{1}{2}}^{x} + m\_{i-\frac{1}{2},j+\frac{1}{2}}^{x} + m\_{i+\frac{1}{2},j-\frac{1}{2}}^{x} + m\_{i+\frac{1}{2},j+\frac{1}{2}}^{x} \right), \\ m\_{i,j}^{y} &= \frac{1}{4}\left( m\_{i-\frac{1}{2},j-\frac{1}{2}}^{y} + m\_{i-\frac{1}{2},j+\frac{1}{2}}^{y} + m\_{i+\frac{1}{2},j-\frac{1}{2}}^{y} + m\_{i+\frac{1}{2},j+\frac{1}{2}}^{y} \right). \end{aligned} \tag{1.45}$$

The curvature *k* is then found by taking the divergence of the normal vector

$$k = -\nabla \cdot \mathbf{n}.\tag{1.46}$$

The surface tension force $f = \sigma k \mathbf{n} \delta$ can be computed using the continuum surface force (CSF) model [101], where the 1D approximate delta function $\delta$ is directly approximated by $\delta \approx |\nabla\phi|$. Thus, we obtain

$$f = \sigma k \mathbf{n} \delta \approx \sigma k \nabla \phi. \tag{1.47}$$

Finally, the mixture density and dynamic viscosity are simply averaged in terms of the VOF function (similarly to Eq. (1.35)):

$$
\rho = \rho\_1 \phi + \rho\_2 (1 - \phi) \qquad \text{and} \qquad \mu = \mu\_1 \phi + \mu\_2 (1 - \phi). \tag{1.48}
$$

Due to the non-uniformity of the density, the Poisson equation used to enforce a divergence-free velocity field becomes

$$\nabla \cdot \left( \frac{1}{\rho^{n+1}} \nabla p^{n+1} \right) = \frac{1}{\Delta t} \nabla \cdot \mathbf{u}^\*,\tag{1.49}$$

which is an equation with variable coefficients. In order to utilise an efficient FFT-based pressure solver with constant coefficients [5, 102], we use the following splitting of the pressure term [103]:

$$\frac{1}{\rho^{n+1}} \nabla p^{n+1} \to \frac{1}{\rho\_0} \nabla p^{n+1} + \left(\frac{1}{\rho^{n+1}} - \frac{1}{\rho\_0}\right) \nabla \left(2p^n - p^{n-1}\right),\tag{1.50}$$

where $\rho\_0$ is a constant density equal to the lowest density of the two phases. With this splitting, the Poisson equation can be rewritten as

$$\nabla \cdot \nabla p^{n+1} = \nabla \cdot \left[ \left( 1 - \frac{\rho\_0}{\rho^{n+1}} \right) \nabla \left( 2p^n - p^{n-1} \right) \right] + \frac{\rho\_0}{\Delta t} \nabla \cdot \mathbf{u}^\*. \tag{1.51}$$

Note that the correction step of the fractional-step method needs to be modified accordingly, and that the term $2p^n - p^{n-1}$ is a linear extrapolation consistent with using a second-order scheme to integrate the momentum equation in time, for example, Adams–Bashforth. The methodology can be extended to Runge–Kutta schemes by following Ref. [104].

### **Level-Set Method**

The level-set function *φ* approximates the signed distance from the interface, thus *φ* = 0 denotes the interface and *φ >* 0 or *φ <* 0 the two different fluids separated by it (see Fig. 1.3c). The motion of the interface is governed by the following transport equation (formally similar to Eq. (1.41)):

$$\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0,\tag{1.52}$$

where $\mathbf{u}$ is the flow velocity field. The equation is closed and allows one to solve the system of equations in a fully Eulerian fashion. Notwithstanding the formal simplicity of the equation, its numerical solution is challenging. The time integration is often performed by a three-stage total variation diminishing third-order Runge–Kutta scheme [105, 106], while the advection term of the equation is usually discretised with one of the following schemes: the high-order upstream-central (HOUC) scheme [107], the weighted essentially non-oscillatory (WENO) scheme [108], the semi-Lagrangian scheme [109] or the semi-jet scheme [105]. Quantitative comparisons of these schemes in various test cases can be found in Ref. [110]. As reported by Ge et al. [106], for flows involving moderate deformations, the HOUC scheme is usually sufficient and the most efficient, while for more complex flows, the WENO or the semi-Lagrangian/jet schemes combined with grid refinement should be used. Note that Eq. (1.52) does not need to be solved over the entire computational domain, but only near $\phi = 0$, where the interface is located and the normal and curvature are needed. Thus, in the so-called narrow band approach [111, 112], the level set is computed and stored only around the interface, and fast computation and low memory usage may be achieved.
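A minimal 1D sketch of Eq. (1.52) with a first-order upwind discretisation follows; it is far more diffusive than the high-order schemes discussed above, but enough to show the interface being transported with the flow. All values are illustrative.

```python
import numpy as np

# 1D level-set transport, Eq. (1.52), with first-order upwinding and a
# uniform velocity u = 1 on a periodic domain [-2, 2).
n, L, u, dt = 400, 4.0, 1.0, 0.005
dx = L / n
x = -L / 2 + (np.arange(n) + 0.5) * dx   # cell centres
phi = np.abs(x) - 0.5                    # interface initially at x = -0.5, 0.5

for _ in range(200):                     # advance to t = 1 (CFL = 0.5)
    phi = phi - dt * u * (phi - np.roll(phi, 1)) / dx   # upwind for u > 0

zs = x[:-1][np.diff(np.sign(phi)) != 0]  # zero crossings = interface
print(zs)  # -> interface carried downstream, near x = 0.5 and x = 1.5
```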

Although the level-set function is initialised as a signed distance, this property is lost over time, causing numerical issues in the evaluation of the normal and curvature [83]. This issue requires an additional treatment in order to reshape the level-set function *φ* into a distance function, i.e. |∇*φ*| = 1. This is usually performed by solving a time-dependent Hamilton–Jacobi equation [83]

$$\frac{\partial \phi}{\partial T} + S(\phi_0)\left(|\nabla \phi| - 1\right) = 0,\tag{1.53}$$

with *φ*<sub>0</sub> the level-set field before redistancing, *T* a pseudo-time and *S(φ*<sub>0</sub>*)* the mollified sign function of the original level set. When the steady-state solution of the equation is reached, the zero level-set contour is unaltered, while the rest of the field has recovered the property of being a signed distance function. In practice, this equation is iterated only for a few steps towards its steady state every certain number of time steps. Although an alternative approach exists, i.e. the fast marching method (FMM) [84], the reinitialisation procedure allows the use of high-order schemes and is easy to parallelise; thus, it has been a much more popular choice.
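A minimal 1D sketch of the redistancing iteration (1.53) may help: starting from a profile with the correct zero crossing but the wrong slope, a Godunov upwind discretisation combined with the mollified sign function relaxes the field towards |∇*φ*| = 1 while leaving the interface location untouched. All numerical parameters below are illustrative.

```python
import numpy as np

# Sketch of the redistancing equation (1.53) in 1D: an initial profile with
# the correct zero crossing but slope 2 is relaxed towards a signed distance
# function, |d(phi)/dx| = 1, via a Godunov upwind scheme in pseudo-time.
N = 101
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
phi = 2.0 * x                        # zero at x = 0, but slope 2 instead of 1
eps = dx
S = phi / np.sqrt(phi**2 + eps**2)   # mollified sign function S(phi_0)

dtau = 0.5 * dx                      # pseudo-time step
for _ in range(300):
    a = (phi[1:-1] - phi[:-2]) / dx  # backward difference
    b = (phi[2:] - phi[1:-1]) / dx   # forward difference
    s = S[1:-1]
    # Godunov's scheme: pick the upwind one-sided gradient on each side
    grad_p = np.sqrt(np.maximum(np.maximum(a, 0.0)**2, np.minimum(b, 0.0)**2))
    grad_m = np.sqrt(np.maximum(np.minimum(a, 0.0)**2, np.maximum(b, 0.0)**2))
    grad = np.where(s > 0.0, grad_p, grad_m)
    phi[1:-1] -= dtau * s * (grad - 1.0)

# the zero crossing is untouched and the field is now ~x near the interface
print(phi[75])   # value at x = 0.5, close to the signed distance 0.5
```

Because the characteristics of Eq. (1.53) emanate from the interface, the field is corrected first in a narrow band around *φ* = 0, which is exactly where the normal and curvature are needed.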

Finally, the unit normal vector *n* and the local mean curvature *κ* can be computed directly from the level-set function as follows:

$$\mathfrak{n} = \frac{\nabla \phi}{|\nabla \phi|} \qquad \text{and} \qquad \kappa = -\nabla \cdot \mathfrak{n}, \tag{1.54}$$

and the body force **f** due to surface tension, which enters the momentum balance of the Navier–Stokes equations, is expressed as

$$\mathbf{f} = \sigma \kappa \delta(\phi) \mathbf{n},\tag{1.55}$$

where *δ* is the Dirac delta function and *σ* the surface tension. The density and viscosity vary across the fluid interface and can be expressed in a mixture form (similarly to Eq. (1.35)) as

$$\rho = \rho_1 H(\phi) + \rho_2 \left(1 - H(\phi)\right) \quad \text{and} \quad \mu = \mu_1 H(\phi) + \mu_2 \left(1 - H(\phi)\right), \quad (1.56)$$

where *H(φ)* is the regularised Heaviside function, defined to be zero in one fluid and unity in the other. In order to utilise an efficient FFT-based pressure solver with constant coefficients, the technique described by Dodd and Ferrante [5, 102] and discussed in the previous section can be used.
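The normal and curvature of Eq. (1.54) can be evaluated with straightforward finite differences. The sketch below does so for the signed-distance field of a circle, for which the mean curvature on the interface is known analytically; grid parameters are arbitrary.

```python
import numpy as np

# Computing the interface normal and curvature, Eq. (1.54), from a level-set
# field by finite differences. For a circle of radius R, phi = r - R is a
# signed distance and, with this sign convention, kappa = -div(n) = -1/R
# on the interface.
N = 200
h = 4.0 / (N - 1)
xs = np.linspace(-2.0, 2.0, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")
R = 1.0
phi = np.sqrt(X**2 + Y**2) - R

gx, gy = np.gradient(phi, h, h)
norm = np.sqrt(gx**2 + gy**2) + 1e-12    # guard against division by zero at r = 0
nx, ny = gx / norm, gy / norm            # unit normal n = grad(phi)/|grad(phi)|
dnx_dx, _ = np.gradient(nx, h, h)
_, dny_dy = np.gradient(ny, h, h)
kappa = -(dnx_dx + dny_dy)               # kappa = -div(n)

# sample the curvature in a narrow band around the interface, |phi| < h
band = np.abs(phi) < h
print(kappa[band].mean())   # close to -1/R = -1
```

This is also where the loss of the signed-distance property hurts: if |∇*φ*| drifts far from one, the normalisation above amplifies discretisation errors, which is the motivation for the redistancing step of Eq. (1.53).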

We conclude by noting that, although easy to implement, the continuum surface force (CSF) approach effectively introduces an artificial spreading of the interface. An alternative approach is the ghost fluid method (GFM) [113–115], which provides a finite-difference discretisation of the gradient operator even when the stencil crosses a discontinuity.

### **Phase-Field Methods**

From a macroscopic point of view, the interface between two immiscible fluids can usually be assumed to be sharp, since its thickness is of the order of a few nanometres. This aspect motivated the numerical methods discussed above, which, however, need to overcome the difficulties associated with evolving a discontinuous front through Eulerian fields. An alternative approach has therefore been proposed and successfully used for different applications, the so-called phase-field (or diffuse-interface) method, where the interface is modelled as a thin layer across which the fluid properties change continuously, avoiding discontinuous fields. The original idea can be attributed to Van der Waals, who suggested that the interface thickness in a binary mixture is determined by the balance of counteracting weakly nonlocal terms in the free energy: these, on the one hand, provide diffusion and, on the other hand, a sharpening of the interface. Cahn and Hilliard [116] used this approach in the context of phase-separation problems, deriving an evolution equation for the concentration field. The thermodynamic consistency of the coupled Cahn–Hilliard/Navier–Stokes model [117] and its ability to handle topological changes are the main reasons that justify the increasing use of phase-field methods [118–122]. The fundamental variable of phase-field models is a scalar field *Φ(***x***, t)* which represents the relative quantity of one of the two phases, and whose extreme values, *Φ* = ±1, correspond to the two pure fluids. Since the two fluids are immiscible, the flow domain is essentially divided into subdomains containing only one of the two phases, separated by an extremely thin interface where the two substances mix, −1 *< Φ <* 1. The equation describing the evolution of the phases is

$$\frac{\partial \Phi}{\partial t} + \mathfrak{u} \cdot \nabla \Phi = \nabla \cdot \left( M \nabla G \right), \tag{1.57}$$

where *G* = *δ*F*/δΦ* is the thermodynamic chemical potential, defined as the functional derivative of the free energy F with respect to *Φ*, and *M* is a proportionality constant called mobility. The left-hand side of Eq. (1.57) represents the convective transport of *Φ*, whereas the right-hand side is the driving force from the chemical potential, which ensures phase separation except in the thin layer constituted by the interface. The mobility *M* has no intuitive physical meaning and is related to the time scale of the interface dynamics. In the limit of vanishing mobility, we recover pure convection, neglecting the inner interface dynamics, whereas for infinite mobility the interface reaches equilibrium immediately and does not vary.

Following the seminal work of Cahn and Hilliard [116], it is possible to write the free energy in the following form:

$$\mathcal{F} = \int \frac{3}{2\sqrt{2}} \sigma \epsilon \left[ \frac{(\Phi^2 - 1)^2}{4\epsilon^2} + \frac{1}{2} |\nabla \Phi|^2 \right] dV,\tag{1.58}$$

with *σ* the surface tension and *ϵ* the interface thickness. The form of the first term between brackets in Eq. (1.58) guarantees the presence of two minima of F for the two pure fluids at *Φ* = ±1. The gradient of the free energy therefore drives the system towards phase separation. However, the second term in the brackets is proportional to the square of |∇*Φ*|. Hence, when reducing the interface thickness, the free energy increases because of the sharpening of the gradient of *Φ*. The combination of these two terms tends to create an equilibrium interface of finite thickness, mimicking what was originally assumed by Van der Waals (see, for example, [120] for more details). Using Eq. (1.58), the chemical potential *G* can be explicitly written as

$$G = \frac{3\,\sigma}{2\sqrt{2}\,\epsilon} \left(\Phi^3 - \Phi - \epsilon^2 \nabla^2 \Phi\right),\tag{1.59}$$

which can be directly inserted in Eq. (1.57) to compute the phase field. Eq. (1.57) needs to be coupled with the Navier–Stokes equations (convection) which, in turn, are influenced by the phase field through surface tension. The surface tension across the relatively thick interface between the two phases can be expressed in terms of the free energy (and chemical potential). From the consideration that, without dissipation, the total energy of the system must be conserved, it is possible to find the expression for the momentum forcing arising from the interface (surface tension) [123],

$$F_s = -\delta \mathcal{F} / \delta \mathfrak{u} = -\Phi \nabla G. \tag{1.60}$$
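A quick numerical sanity check of Eq. (1.59): in 1D the equilibrium profile *Φ(x)* = tanh(*x/(*√2 *ϵ))* makes the bracket *Φ*³ − *Φ* − *ϵ*²*Φ′′* vanish identically (writing *ϵ* for the interface thickness), so the chemical potential *G* is zero across a planar interface at rest. The sketch below verifies this with central differences.

```python
import numpy as np

# Check of Eq. (1.59): on the equilibrium profile Phi = tanh(x/(sqrt(2) eps))
# the combination Phi^3 - Phi - eps^2 Phi'' vanishes, i.e. the chemical
# potential G is zero, so a planar interface at rest does not evolve.
eps = 0.1                      # interface thickness (illustrative value)
N = 801
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
Phi = np.tanh(x / (np.sqrt(2.0) * eps))

# second derivative by central differences (interior points only)
d2Phi = (Phi[2:] - 2.0 * Phi[1:-1] + Phi[:-2]) / dx**2
residual = Phi[1:-1]**3 - Phi[1:-1] - eps**2 * d2Phi
print(np.abs(residual).max())  # ~0, up to the O(dx^2) discretisation error
```

The same tanh profile is the standard initial condition for phase-field simulations, since any other interface shape of width *ϵ* first relaxes towards it.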

Considering Eqs. (1.57), (1.59) and (1.60), together with the incompressible Navier–Stokes system, we can write the complete Navier–Stokes/Cahn–Hilliard model for immiscible fluid dynamics. In the following, it is presented directly in dimensionless form:

$$\nabla \cdot \mathbf{u} = 0 \tag{1.61}$$

$$\frac{\partial \mathfrak{u}}{\partial t} + \mathfrak{u} \cdot \nabla \mathfrak{u} = -\nabla p + \frac{1}{Re} \nabla^2 \mathfrak{u} + \frac{3}{2\sqrt{2}} \frac{1}{\mathrm{We}\,\mathrm{Cn}} \left( \Phi^3 - \Phi - \mathrm{Cn}^2 \nabla^2 \Phi \right) \nabla \Phi \tag{1.62}$$

$$\frac{\partial \Phi}{\partial t} + \mathfrak{u} \cdot \nabla \Phi = \frac{M^*}{\mathrm{Cn}} \nabla^2 \left[ \Phi^3 - \Phi - \mathrm{Cn}^2 \nabla^2 \Phi \right]. \tag{1.63}$$

Here, the equations have been made dimensionless using the typical flow scales: *L* for length, *U* for velocity, *μ* and *ρ* for dynamic viscosity and density. We have also introduced the Cahn number Cn = *ϵ/L* as the dimensionless measure of the interface thickness, the Weber number We = *ρU*<sup>2</sup>*L/σ* as the ratio between convection and surface-tension effects, the dimensionless mobility *M*<sup>∗</sup> = 3*Mσ/(*2√2*UL*<sup>2</sup>*)* measuring the (dimensionless) mobility intensity, and the Reynolds number *Re* = *ρUL/μ*. When using the phase-field method, it is important to correctly set the Cahn number Cn and the mobility number *M*<sup>∗</sup>. While the other dimensionless numbers used above are classical and depend only on the macroscopic properties of the system, the values of Cn and *M*<sup>∗</sup> are set also by numerical considerations.
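The structure of Eq. (1.63) can be illustrated with a 1D periodic sketch (with *u* = 0): a semi-implicit Fourier scheme treats the stiff Cn²∇⁴ term implicitly and the nonlinear term explicitly, a common choice for Cahn–Hilliard solvers. The values of Cn, *M*∗ and the time step below are illustrative only.

```python
import numpy as np

# One-dimensional sketch of the Cahn-Hilliard equation (1.63) with u = 0 on
# a periodic domain. Semi-implicit Fourier scheme: the stiff linear term
# Cn^2 nabla^4 Phi is implicit, the nonlinear term (Phi^3 - Phi) explicit.
N = 128
Lx = 2.0 * np.pi
k = np.fft.fftfreq(N, d=Lx / N) * 2.0 * np.pi   # wavenumbers
Cn, Mstar = 0.1, 0.01                            # illustrative values
dt = 1e-3

rng = np.random.default_rng(0)
Phi = 0.01 * rng.standard_normal(N)   # small perturbation of the mixed state
mass0 = Phi.mean()

for _ in range(500):
    nonlin_hat = np.fft.fft(Phi**3 - Phi)
    Phi_hat = np.fft.fft(Phi)
    # (1 + dt (M*/Cn) Cn^2 k^4) Phi_hat^{n+1}
    #     = Phi_hat^n - dt (M*/Cn) k^2 (Phi^3 - Phi)_hat^n
    Phi_hat = ((Phi_hat - dt * (Mstar / Cn) * k**2 * nonlin_hat)
               / (1.0 + dt * (Mstar / Cn) * Cn**2 * k**4))
    Phi = np.fft.ifft(Phi_hat).real

# the right-hand side of (1.63) is a divergence, so the mean of Phi (the
# relative amount of each phase) is conserved and Phi remains bounded
print(abs(Phi.mean() - mass0))   # ~0 (machine precision)
```

Note that the *k* = 0 mode is left exactly unchanged by the update, which is the discrete counterpart of mass conservation in the Cahn–Hilliard equation.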

Since the physical thickness of the interface is usually on the nanometre scale, realistic values of the Cahn number for micro- and macro-scale applications may range between 10<sup>−3</sup> and 10<sup>−10</sup>. As the interface needs to be resolved with 4–5 grid points for accurate numerical simulations, the use of such realistically tiny values of the Cahn number is impractical, and an artificial thickening of the interface is necessary. Clearly, a trade-off is necessary when deciding the numerical interface thickness; to this end, it has been shown that the dynamics is not affected by the artificial thickening if the typical interface size is well below the smallest flow length scales. In other words, the Cahn number should be small, yet it can be much larger than the real one, for example, around 10<sup>−2</sup>; this is the so-called *sharp-interface limit* [124]. In this limit, any further decrease of the Cahn number does not produce appreciable differences in the macroscopic dynamics.

It is more difficult to precisely define, or even measure, the mobility *M* [123]. Moreover, it is crucial to determine how the mobility depends on the interface thickness, because of the artificial magnification of the latter necessary in simulations. Recently, it has been found using matched asymptotic expansions that the optimal scaling to recover the sharp-interface limit is *M*<sup>∗</sup> = *α* Cn<sup>2</sup> (with *α* an order-one constant); see Ref. [120] for more details.

The main advantage of phase-field methods lies in their strict thermodynamic derivation and in their ability to automatically handle interface topology changes, the former also opening possibilities for direct simulations of phase change. The main drawback is the high computational cost, since it is important to use at least 4–5 points within the interface thickness which, in turn, needs to be smaller than the flow length scales. The Navier–Stokes/Cahn–Hilliard model has recently been adopted in several studies of immiscible fluids in both laminar and turbulent conditions, for example, Refs. [122, 125–127]. Besides the Cahn–Hilliard formulation, different phase-field methods have been proposed, based on alternative forms of the free energy; see, for example, [128] for compressible flows with phase change in cavitation problems. Similar methods exist also in the lattice Boltzmann framework; for more details the reader is referred to [129–131] and the references therein.

# *1.3.3 Other Approaches*

In many applications the suspended objects are much smaller than the smallest hydrodynamical scale; in these cases the so-called point-particle method can be used, in which the particles are treated as Lagrangian points moving with the local flow velocity. When the volume fraction of the particles is small enough, hydrodynamic interactions and collisions among particles can be effectively neglected, while for large values of the particle-to-fluid density ratio, i.e. for significant mass loads, the momentum exchange between the two phases is still significant and must be accounted for. In general, the motion of the particles is described by a set of ordinary differential equations for the particle velocity and position, with the velocity depending on the Stokes drag and the buoyancy force [132]; added mass and history forces can also become important [133, 134]. Note that the Stokes drag coefficient is sometimes corrected with empirical correlations when the particle Reynolds number does not remain small. The interested reader is referred to Refs. [135–138] for more details. Other approaches have been proposed to model the feedback of the point particles on the flow, which should be independent of the resolution used; details can be found in Refs. [139–142].
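For Stokes drag alone in a uniform flow, the point-particle equations described above reduce to a linear relaxation of the particle velocity towards the fluid velocity, which makes a convenient check for an integrator. The sketch below uses explicit Euler and illustrative parameter values.

```python
import numpy as np

# Minimal point-particle model: dv/dt = (u(x) - v)/tau_p + g, dx/dt = v,
# with tau_p the particle response time. The carrier flow u is uniform and
# gravity is switched off, so the particle velocity relaxes exponentially
# towards the fluid velocity, v(t) = u + (v0 - u) exp(-t/tau_p).
tau_p = 0.05      # particle response time (illustrative value)
u_fluid = 1.0     # uniform carrier-flow velocity
g = 0.0           # gravity switched off for this check

v, xp = 0.0, 0.0  # particle starts at rest at the origin
dt = tau_p / 200.0
t = 0.0
while t < 5.0 * tau_p:
    v += dt * ((u_fluid - v) / tau_p + g)   # explicit Euler for the velocity
    xp += dt * v                            # and for the position
    t += dt

v_exact = u_fluid * (1.0 - np.exp(-t / tau_p))
print(abs(v - v_exact))   # small: explicit Euler error, O(dt)
```

The added-mass, history and finite-Reynolds-number drag corrections mentioned in the text would enter as extra terms on the right-hand side of the velocity equation.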

In recent years, various alternative methods have been proposed which are modifications and/or combinations of the methods previously discussed. A commonly used approach is the force coupling method: this bridges methods for Stokes flow and for finite Reynolds number conditions, and is based on a low-order, finite force-multipole representation of the effect the particles have on the surrounding fluid flow. In particular, the full Navier–Stokes equations are solved with an additional force density. The force monopole corresponds to the force the particle transmits to the fluid if it were a rigid particle with the same density as the fluid, while the force dipole is a combination of a symmetric stresslet and a torque acting on the fluid: the torque is set, in a similar manner as the force monopole, in terms of the angular momentum of the displaced fluid, while the stresslet is chosen to ensure that the average rate of strain within the particle is zero. The aim is to create a flow outside the particle that matches the actual flow within a short distance from the surface. Note that the fluid inside the particle volume is an active part of the simulation and satisfies the same integral moments as a rigid particle. The interested reader is referred to the detailed review in Ref. [143] and the references therein.

Another technique is the so-called volume-of-fluid tensorial penalty method, based on the one-fluid formulation modified to deal with particle flows. In particular, the fluid and solid phases are treated as two fluids with different rheological properties, distinguished by a phase function such as the solid volume fraction. The solid-particle behaviour is recovered in the Navier–Stokes equations by a proper decomposition of the stress tensor: the stress tensor is rewritten so as to separate the compression, tearing, shearing and rotation contributions [144]. The decomposition is used to separate the stress components operating in a viscous flow and to facilitate the implementation of a numerical penalty method. In the solid phase it is imposed that the local flow admits no shearing and tearing while preserving a constant rotation; these flow constraints are implicitly transmitted to the particle sub-domain as they are solved together with the fluid motion. Note that this viscous penalty method is formally equivalent to choosing a viscosity much larger than that of the fluid, similarly to what was previously done when discussing plastic effects. However, from a discrete point of view, the two formulations are not equivalent: when the dynamic viscosity is used to impose the solid behaviour, first-order convergence in space is usually obtained and a rasterisation effect can appear at the particle–fluid interface; on the other hand, when the viscous stress tensor splitting is used, a more accurate representation of the fluid–solid interface is obtained, reducing the rasterisation effect, and second-order convergence can be achieved. To solve the unsteady Navier–Stokes equations together with the incompressibility and solid constraints, the augmented Lagrangian method can be applied. The interested reader is referred to Refs. [145–147].

# **1.4 Conclusions**

In this chapter, we have presented current numerical methods for complex fluids. In the first part, we discussed continuum approaches to viscoplastic and viscoelastic fluids, whereas interface-resolved methods for the simulation of suspensions of rigid and deformable particles, droplets and bubbles were presented in the second part.

Continuum approaches require rheological models and constitutive equations for the additional stresses due to the fluid microstructure. These are derived theoretically or from experimental data; recently, however, the fast development of computational resources has enabled us to also resolve the microstructure with numerical simulations of the type described in the second part of this chapter. Although important results are continuously reported on the behaviour of viscoplastic and viscoelastic fluids, both in laminar and turbulent flows, these are somewhat restricted to relatively simple geometries. It is therefore relevant to explore the performance of the current numerical algorithms in more complex geometries and thus come closer to industrial applications and natural flows such as, for example, avalanches. In addition, new and more sophisticated models are continuously proposed, also thanks to the development of new experimental techniques able to probe fluids subject to time-varying shear or stress. From a numerical point of view, these will pose new challenges which we may not be able to tackle with the tools currently available. One such example is the isotropic kinematic hardening idea, based on the concept that the material yield stress builds up and evolves in time together with the flow field, where the steady-state yield stress is determined via the back-stress modulus (a new material parameter) and the deformation of the microstructure (a hidden internal dimensionless evolution variable). Another important point for industrial applications is the case where the fluid stress induces breakage of a solid structure, as in hydraulic fracturing. Once break-up occurs, the fragment behaves as a suspended particle that may contribute to additional solid breakage events. Despite its relevance in applications, the (automatic) numerical handling of this complex dynamics is still challenging.

Examples of simulations of multiphase and multicomponent flows are numerous and ever increasing, so the approaches reported here necessarily constitute a quick and limited overview. Two directions are seen here as emerging and of potential interest: on one side, the study of intrusions in a complex fluid, ubiquitous in applications; on the other, the need to improve models of short-range near-contact interactions among particles/droplets. Indeed, when particles/bubbles are significantly larger than the colloids or macromolecules providing plastic and elastic effects, it is reasonable to model the complex fluid as intrusions in a non-Newtonian matrix, thus combining the two types of approaches presented here, denoted as macroscopic and microscopic.

Although the approaches discussed here fully resolve the surface of the suspended phase, rigid or deformable, solid or fluid, models are necessary for the short-range interactions: (i) from a numerical point of view, the grid resolution inevitably becomes coarse when two objects approach each other, and (ii) microscale chemical and physical effects determine different interactions, for example, repulsive or attractive forces, slippage, Marangoni effects and depletion forces associated with the microstructure of the suspending phase. These become therefore truly multiscale problems, ranging from nanoscale surface interactions to millimetre/centimetre flows. Coupling of the numerical approaches mentioned here with nanoscale simulations, for example, molecular dynamics, is, therefore, a very active research area.

We would like to conclude by stressing that we believe it is fundamental to always be aware of the limitations of the simulations one wishes to perform. This awareness allows the researcher to design configurations and propose numerical experiments, though sometimes unphysical ones, that can unveil important physics, something which may not be possible in laboratory experiments.

# **References**




**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 2 Basic Concepts of Stokes Flows**

**Christopher I. Trombley and Maria L. Ekiel-Jeżewska**

# **2.1 Introduction**

Stokes flows have many applications in both physical theory and practice. For example, they have been used to describe dynamics of complex fluids in microfluidics, lab-on-chip technologies [1], medical applications [2, 3], design of innovative materials [4–6] and micro-devices—e.g. to carry drugs [7, 8] or act as fuel cells [9]—and in biological systems [10–15].

In this chapter, we discuss some fundamental properties of Stokes flows, namely the negligibility of inertial forces, reversibility and the minimum energy dissipation theorem. First, we briefly discuss how neglecting inertial forces simplifies the nonlinear Navier–Stokes equations to the linear Stokes equations. We then discuss two basic aspects of Stokes flows: reversibility and the minimum energy dissipation theorem. In order to bring out the nature of the three principles, we demonstrate by example how these properties can be used to draw conclusions about the fluid systems under investigation without the laborious construction of analytical solutions. We then move beyond the Stokes approximation in various ways in order to see how the principles work in a more general context. Finally, we conclude by discussing the logical structure of the principles as revealed by the examples considered.

C. I. Trombley · M. L. Ekiel-Jeżewska (✉)

Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland e-mail: mekiel@ippt.gov.pl

© The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9_2

# **2.2 Navier–Stokes and Stokes Equations**

# *2.2.1 Navier–Stokes Equations*

We start with the general Navier–Stokes equations for an incompressible fluid. These are [16–18]

$$
\rho \frac{\partial \mathbf{u}}{\partial t} + \rho (\mathbf{u} \cdot \nabla) \mathbf{u} = \mu \nabla^2 \mathbf{u} - \nabla p + \mathbf{F} \tag{2.1}
$$

$$\nabla \cdot \mathbf{u} = 0 \tag{2.2}$$

where *ρ* is the density of the fluid, **u** is the velocity field of the fluid, *μ* is the dynamic viscosity of the fluid, *p* is the fluid pressure field,<sup>1</sup> and **F** captures the effects of external forces. The left-hand side of this equation represents the inertial forces, that is, the acceleration of a fluid element of unit volume. The right-hand side is the sum of the viscous and pressure forces, *μ*∇<sup>2</sup>**u** and −∇*p*, respectively, exerted on the surfaces of this fluid element, and any external body force **F** acting on the fluid element. The second equation is based on the conservation of mass of a fluid element and achieves its simple form because of the incompressibility of the fluid.

We use non-dimensionalisation in order to capture the relative scale of the forces. Define *U* to be a characteristic velocity of the fluid and *L* to be a characteristic length scale. Other characteristic dimensional scales of the flow, for instance, a time scale *T* = *L/U*, can be defined implicitly from these scales. There is still some freedom when normalising pressure *p* and body forces **F**. We choose to normalise pressure by a characteristic viscous force per unit area and **F** by a characteristic viscous force per unit volume as in [18]. Using a star to denote non-dimensionalised objects, this results in the following definitions:

$$\begin{aligned} \mathbf{u}^* &= \frac{\mathbf{u}}{U} \\ \nabla^* &= L\nabla \\ \frac{\partial}{\partial t^*} &= \frac{L}{U} \frac{\partial}{\partial t} \\ p^* &= \frac{L}{\mu U} p \\ \mathbf{F}^* &= \frac{L^2}{\mu U} \mathbf{F} \end{aligned} \tag{2.3}$$

<sup>1</sup>In the presence of a gravitational field, *p* is the so-called modified pressure, which takes into account also gravitational potential energy per unit fluid volume.

With the above characteristic dimensional scales, the inertial force per unit volume is estimated by *ρU*<sup>2</sup>*/L* and the scale of the viscous force per unit volume is *μU/L*<sup>2</sup>. The Reynolds number, *Re*, a non-dimensional number defined as the ratio of inertial and viscous forces in a fluid, takes the form

$$Re = \frac{(\rho U^2)/L}{(\mu U)/L^2} = \frac{\rho UL}{\mu} \tag{2.4}$$

The end result is the following non-dimensional version of the Navier–Stokes equation (2.1):

$$\operatorname{Re}\left(\frac{\partial \mathbf{u}^*}{\partial t^*} + \mathbf{u}^* \cdot \nabla^* \mathbf{u}^*\right) = \nabla^{*2} \mathbf{u}^* - \nabla^* p^* + \mathbf{F}^* \tag{2.5}$$

The left-hand side is the inertial force and the right-hand side is the viscous, pressure and body forces. Flows with the same *Re* are hydrodynamically similar [18].

A difficulty in using Eqs. (2.1) and (2.2) (or their non-dimensional form) in the analysis of fluids is that the inertial forces are nonlinear in **u**. In terms of forces, the so-called Stokes approximation can be understood as the regime in which the viscous and pressure forces dominate the inertial forces absolutely. The Reynolds number allows one to test the applicability of the Stokes approximation to a given fluid system: the approximation holds exactly in the limit as this ratio goes to zero [17–22]. For this reason, Stokes flows are often called low Reynolds number, non-inertial or viscous flows.

# *2.2.2 Stokes Flows*

Taking the limit *Re* → 0 in Eq. (2.5) one obtains the non-dimensional steady Stokes equations. In dimensional form, without external body forces, sources or sinks, they read

$$
\mu \nabla^2 \mathbf{u} - \nabla p = 0 \tag{2.6}
$$

$$\nabla \cdot \mathbf{u} = 0 \tag{2.7}$$

The first equation states the balance of forces in a non-accelerating fluid. The second equation is, as in Eq. (2.2), the conservation of mass for incompressible fluids.

The Stokes equations (2.6) and (2.7) must be combined with boundary conditions appropriate to the physical situation. The so-called stick or no-slip boundaries at rigid walls and at the surfaces of particles are important examples. Consider a surface *S* moving with local velocity **w**. The fluid velocity **u** satisfies the no-slip boundary condition if on *S* one has

$$\mathbf{u}(\mathbf{r}) = \mathbf{w}(\mathbf{r}) \qquad \text{ for } \mathbf{r} \in \mathcal{S} \tag{2.8}$$

There are many other important examples of boundary conditions, such as the boundaries for a free surface [18], but we will focus on the stick boundary conditions, which are sufficient for considering inertial forces, reversibility and the minimum energy dissipation. When considering reversibility especially, one must remember that boundaries can be time dependent. This means that the boundaries move, such as in the classical Taylor–Couette experiment involving a fluid between two rotating cylinders [23]. The Stokes equations (2.6) and (2.7) can also apply to unbounded flow problems by the selection of an appropriate boundary at infinity. For instance, a fluid can be constrained to be at rest at infinity, as in case of particles settling in a quiescent fluid.

Equations (2.6)–(2.7) are linear, so that any linear combination of solutions *(***u**1*, p*1*)* and *(***u**2*, p*2*)* is also a solution *(***u**<sup>1</sup> + **u**2*, p*<sup>1</sup> + *p*2*)*. Linearity allows for classes of solutions to be constructed. One example is the case of flow around a rigid sphere, where a complete set of elementary solutions to Eqs. (2.6) can be constructed, as done by Lamb [16]. In his families of elementary solutions, the pressure *p* is expanded in spherical harmonics and the velocity field **u** is written as an infinite series of solid harmonics. This concept is used in the multipole method of solving the Stokes equations for systems of particles moving in fluids [20, 24–30].
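The superposition property can be made concrete with the simplest elementary solution, the Stokeslet (the flow due to a point force), rather than Lamb's full harmonic expansion: by linearity, any sum of Stokeslets is again a solution of Eqs. (2.6)–(2.7). The sketch below checks numerically that a superposition of two Stokeslets remains divergence-free; the positions and force strengths are arbitrary choices.

```python
import numpy as np

# The Stokeslet (point-force solution of the Stokes equations),
#   u_i(r) = F_j / (8 pi mu) * (delta_ij / |r| + r_i r_j / |r|^3),
# is the simplest elementary solution. By linearity, superpositions of
# Stokeslets also solve Eqs. (2.6)-(2.7); here we verify incompressibility
# of such a superposition by a central-difference divergence.
mu = 1.0

def stokeslet(r, F):
    """Velocity at displacement r from a point force F."""
    rn = np.linalg.norm(r)
    return (F / rn + r * np.dot(r, F) / rn**3) / (8.0 * np.pi * mu)

def u_total(r):
    # two point forces at different locations, arbitrary strengths
    return (stokeslet(r - np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
            + stokeslet(r - np.array([1.0, 0.5, 0.0]), np.array([0.0, 2.0, -1.0])))

# numerical divergence at a test point away from both singularities
p = np.array([0.3, -0.4, 0.7])
h = 1e-5
div = sum((u_total(p + h * e)[i] - u_total(p - h * e)[i]) / (2.0 * h)
          for i, e in enumerate(np.eye(3)))
print(abs(div))   # ~0: the flow satisfies Eq. (2.7)
```

The multipole method mentioned above builds many-particle solutions in essentially this way, from systematically refined families of such singularity solutions.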

# **2.3 Reversibility of Fluid Flows**

Because the Stokes equations (2.6)–(2.7) are steady and linear, the motion they predict is reversible in time. Mathematically, this means that the reversibility transformation of any solution, that is, *(***u***(***x***, t), p(***x***, t))* −→ −*(***u***(***x***, t), p(***x***, t))*, also gives a solution. This can be checked by simple algebraic manipulation of the governing equations. G.I. Taylor explained the physical meaning of reversibility in his film *Low Reynolds Number Flows* [31]: "low Reynolds number flows are reversible when the direction of motion of the boundaries which gave rise to the flow is reversed". Indeed, a reversed fluid flow can result from reversing the velocity of the boundary (equal to the fluid velocity at the surfaces of particles or walls) or from reversing the directions of the external and the opposite hydrodynamic forces. In the following, we will show how reversibility allows one to predict symmetries of fluid flows and of the motion of particles in fluids.

# *2.3.1 Examples of Reversibility*

One of the most dramatic presentations of reversibility is seen in the film mentioned above [31]. In this experiment, the volume between two transparent cylinders is filled with glycerine. Dyes forming a compact coloured volume are injected into the glycerine to help visualise the flow. The inner cylinder is rotated, causing the dyes to stir and apparently mix. The inner cylinder is then rotated in the opposite direction, and one sees the seemingly mixed fluids unstir themselves. This experiment demonstrates the difficulty of mixing low Reynolds number fluids, an important problem for microfluidics.

G.I. Taylor used this experiment to explain the concept of reversibility in the following way: "On reversal of the motion of the boundary, every particle retraces exactly the same path on its return journey as on the outward journey, and at every point its speed is the same fraction of the boundary speed as it was at the same point on its outward journey, so that when the boundary has returned to its original position every particle in the fluid has also done so and the original pattern of dye is reproduced" [32].

A very important consequence of reversibility in biology is that the ordinary swimming motion done by an idealised swimmer with a rigid tail could not produce forward motion in a non-inertial fluid, since any propulsion created by the swimmer when the tail moves left is exactly cancelled when it moves right, as demonstrated in [31]. This is a consequence of the "Scallop theorem" fundamental to the study of the locomotion of microscale organisms [11].

One can use reversibility to derive basic properties of solutions to the Stokes equations without finding the solutions explicitly. Take the case of a rotating, but not translating, sphere immersed in fluid governed by Eqs. (2.6) and (2.7). This situation is illustrated in Fig. 2.1. We might ask how much force such a sphere would feel. Here we mean the force exerted by the fluid on the sphere owing to stick boundary conditions on its surface. This hydrodynamic force needs to be balanced by the opposite external (non-hydrodynamic) force acting on the sphere. Through the use

**Fig. 2.1** A solid sphere rotating without translations near a solid wall. Reversibility implies that the sphere does not feel any external force perpendicular to the wall

of superposed reversibility and symmetry transformations, one finds that in this situation the sphere does not feel any hydrodynamic force in the direction perpendicular to the wall [33]. This can be proven by contradiction. Suppose $F_x \neq 0$. Notice that if we put the origin at the centre of the sphere, the system is symmetric under the transformation $y \to -y$. This reflection reverses the rotation of the sphere, but leaves the $x$-component of the force the same. Now apply the reversibility transformation. The rotation is reversed back to its original sense, while the force vector changes to the opposite direction. The result is that the sphere is at the same position and has the same physical rotation, but the opposite $F_x$. This is a contradiction. This argument shows how reversibility and symmetry arguments can be combined to put strong restrictions on Stokes flow [33, 34].

We can apply reversibility arguments again to the case of a sphere which moves under a constant gravitational force parallel to a solid wall. Under time reversal, the directions of the sphere's velocity and of the force are also reversed. By the same argument as above, i.e. by combining the time reversal with the reflection with respect to the plane $y = 0$, there is no velocity in the direction perpendicular to the wall; the sphere keeps translating parallel to the wall [34]. This reasoning applies to the sedimentation of a slowly moving particle of any shape and material that is symmetric with respect to reflection in the plane $y = 0$ [33, 34]. We have thus demonstrated that reversibility has observable consequences which do not require elaborate constructions.

# *2.3.2 Irreversible Trajectories in Stokes Flow*

When applying reversibility, one must take care that the transformation reverses both time and forces. In particular, the paths that particles take need not be reversible in time even though the Stokes equations are reversible in time. As an example, consider the system shown in Fig. 2.2: two spheres of the same radius—one fixed and another settling onto it from above under gravity. For non-touching spheres, the trajectories of the moving sphere's centre are symmetric with respect to reflection in the plane *z* = 0. Under time reversal, the gravitational force is reversed and the sphere centre moves backwards along the same trajectory. However, the reversibility of the trajectories is broken when the two spheres come so close to each other that their surfaces interact by direct forces, such as van der Waals attraction or the mechanical reaction of rough surfaces at contact [35–39]. The reason is that central direct forces are not symmetric with respect to the superposition of the time reversal with reflection in the horizontal plane *z* = 0.

**Fig. 2.2** Experimentally observed trajectories of the centre of a sphere settling under gravity in silicone oil towards another fixed sphere of the same radius. Top: reprinted by permission from Ref. [36]. Copyright Kluwer Academic Publisher (2002). Bottom: reprinted with permission from Ref. [37]. Initially, the line of the sphere centres is inclined with respect to gravity. For a large inclination, the surfaces of the spheres are always separated by fluid, and the trajectories are reversible. However, if the initial inclination is small enough, after some time the surfaces come into contact and the resulting direct forces break the reversibility of the trajectories

# **2.4 Minimum Energy Dissipation Theorem**

We will now give a "variational" view of Stokes flow. A solution to Stokes equations (2.6) and (2.7) is the unique divergence-free vector field that minimises the extensive energy dissipation rate (that is, the energy dissipated by the bulk of the fluid) [20]. In this section, we will state this minimum energy dissipation theorem precisely and sketch a proof. After that, we will apply it to derive "inclusion monotonicity", a principle about particles moving through Stokes flows.

# *2.4.1 Statement*

Consider a fluid filling a volume *V* with an impermeable boundary *∂V* = *S*. Let **u** be the velocity of a Stokes flow defined by Eqs. (2.6) and (2.7). Let **v** be a divergence-free vector field describing a flow in *V* with the same boundary conditions as **u**. The minimum energy dissipation theorem states that

$$
\epsilon^{\mathbf{u}} \le \epsilon^{\mathbf{v}} \tag{2.9}
$$

where $\epsilon^{\mathbf{u}}$ is the extensive energy dissipation rate of the Stokes flow and $\epsilon^{\mathbf{v}}$ is the extensive energy dissipation rate of the other flow.

For an excellent discussion of how these relations for the change of internal energy over time are established physically, see Section 3.4 of [18]. For now we will simply use the fact that for an incompressible fluid, the intensive energy dissipation rate (i.e. the energy dissipated per unit volume) is

$$\Phi^{\mathbf{u}} = 2\mu \mathbf{e}^{\mathbf{u}} : \mathbf{e}^{\mathbf{u}} \tag{2.10}$$

$$\Phi^\mathbf{v} = 2\mu \mathbf{e}^\mathbf{v} : \mathbf{e}^\mathbf{v} \tag{2.11}$$

where $\mathbf{e}^{\mathbf{u}}$ is the rate of strain tensor for the Stokes flow **u**, given component-wise as $e^{\mathbf{u}}_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)$ (and similarly for $e^{\mathbf{v}}_{ij}$), and : is the double dot product. Integrating over *V* gives the extensive energy dissipation rates, so that

$$
\epsilon^{\mathbf{u}} = \int \Phi^{\mathbf{u}} dV \tag{2.12}
$$

$$
\epsilon^\mathbf{v} = \int \Phi^\mathbf{v} dV \tag{2.13}
$$
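As a concrete check of Eq. (2.10), one can evaluate $\Phi$ for a simple shear flow $\mathbf{u} = (\dot{\gamma} y, 0, 0)$, for which the double contraction reduces to the textbook result $\Phi = \mu \dot{\gamma}^2$. The numerical values of $\mu$ and $\dot{\gamma}$ below are illustrative:

```python
import numpy as np

# Evaluate Phi = 2*mu*e:e (Eq. (2.10)) for simple shear u = (gdot*y, 0, 0);
# the values of mu and gdot are illustrative.
mu, gdot = 1.0e-3, 50.0          # viscosity [Pa s], shear rate [1/s]

grad_u = np.array([[0.0, gdot, 0.0],     # velocity gradient du_i/dx_j
                   [0.0, 0.0,  0.0],
                   [0.0, 0.0,  0.0]])

e = 0.5 * (grad_u + grad_u.T)    # rate-of-strain tensor e_ij
phi = 2.0 * mu * np.sum(e * e)   # 2*mu times the double dot product e:e

# for simple shear this reduces to the textbook result mu * gdot**2
print(np.isclose(phi, mu * gdot**2))   # True
```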

Having thus connected the energy dissipation rate to the mechanical properties of the flow, we can now discuss the proof of Eq. (2.9). Because the minimum energy dissipation theorem is proven and discussed in many textbooks, such as [20], we will only give a brief outline. One starts by demonstrating

$$\int (e_{ij}^{\mathbf{v}} - e_{ij}^{\mathbf{u}}) e_{ij}^{\mathbf{u}}\, dV = 0 \tag{2.14}$$

from Green's theorem, the divergence theorem and Stokes equations (2.6) and (2.7). Then one subtracts Eq. (2.14) from the extensive energy dissipation rate for **v** and rearranges

$$2\mu \int e_{ij}^{\mathbf{v}} e_{ij}^{\mathbf{v}}\, dV = 2\mu \int \left( e_{ij}^{\mathbf{v}} e_{ij}^{\mathbf{v}} - (e_{ij}^{\mathbf{v}} - e_{ij}^{\mathbf{u}}) e_{ij}^{\mathbf{u}} \right) dV \tag{2.15}$$

$$=2\mu \int \left( e_{ij}^{\mathbf{u}} e_{ij}^{\mathbf{u}} + (e_{ij}^{\mathbf{v}} - e_{ij}^{\mathbf{u}}) e_{ij}^{\mathbf{v}} \right) dV \tag{2.16}$$

$$=2\mu \int \left( e_{ij}^{\mathbf{u}} e_{ij}^{\mathbf{u}} + (e_{ij}^{\mathbf{v}} - e_{ij}^{\mathbf{u}}) e_{ij}^{\mathbf{v}} - (e_{ij}^{\mathbf{v}} - e_{ij}^{\mathbf{u}}) e_{ij}^{\mathbf{u}} \right) dV \tag{2.17}$$

$$=2\mu \int \left( e_{ij}^{\mathbf{u}} e_{ij}^{\mathbf{u}} + (e_{ij}^{\mathbf{v}} - e_{ij}^{\mathbf{u}})^2 \right) dV \tag{2.18}$$

This shows that $2\mu \int \left( e^{\mathbf{v}}_{ij} e^{\mathbf{v}}_{ij} - e^{\mathbf{u}}_{ij} e^{\mathbf{u}}_{ij} \right) dV \ge 0$, which by Eqs. (2.10) and (2.11) is the same as

$$\int \left(\Phi^{\mathbf{v}} - \Phi^{\mathbf{u}}\right) dV \ge 0 \tag{2.19}$$

By Eqs. (2.12) and (2.13), one sees that Eq. (2.19) is the same as Eq. (2.9), the minimum energy dissipation theorem.
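A minimal numerical illustration of the theorem (our toy setup, not from the text) is plane Couette flow: among unidirectional profiles $u(y)$ with fixed boundary values $u(0) = 0$ and $u(h) = U$ (all of which are divergence-free), the linear Stokes profile dissipates the least energy:

```python
import numpy as np

# Plane Couette check of the minimum dissipation theorem (our toy setup):
# among profiles u(y) with u(0) = 0 and u(h) = U, the linear Stokes solution
# minimises the dissipation integral of mu*(du/dy)**2.
mu, U, h = 1.0, 1.0, 1.0
y = np.linspace(0.0, h, 20_001)

def dissipation(u):
    dudy = np.gradient(u, y)
    return np.trapz(mu * dudy**2, y)

u_stokes = U * y / h                              # exact Couette profile
u_pert = u_stokes + 0.2 * np.sin(np.pi * y / h)   # same boundary values

print(np.isclose(dissipation(u_stokes), mu * U**2 / h))   # True
print(dissipation(u_stokes) < dissipation(u_pert))        # True
```

Any divergence-free competitor with the same boundary values, here a sinusoidal perturbation that vanishes at both walls, strictly increases the dissipation, as Eq. (2.18) guarantees.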

# *2.4.2 An Application of the Minimum Energy Dissipation Theorem*

One advantage of variational principles such as the minimum energy dissipation theorem is that they can be used to describe the behaviour of general rigid bodies in a Stokes flow. We will give an example through the principle of "inclusion monotonicity": if one particle is large enough to completely contain another particle, then we can compare the magnitudes of the drag forces the two particles experience in a Stokes flow. Inclusion monotonicity follows from the minimum energy dissipation theorem, as we will now show in a manner following [20].

Let a rigid particle 1 take up a volume $V_1$ with surface $\partial V_1 = S_1$, and compare with the flow around a rigid particle 2 taking up a volume $V_2$ with surface $\partial V_2 = S_2$. Both particles undergo the same translational motion with velocity **w** without rotation. The fluid is described by the Stokes equations (2.6) and (2.7), and the particles have no-slip boundary conditions on their surfaces. The forces that the particles exert on the fluid are

$$\mathbf{f}_1 = \oint \sigma_1 \cdot \mathbf{n}_1 \, dS_1 \tag{2.20}$$

$$\mathbf{f}_2 = \oint \sigma_2 \cdot \mathbf{n}_2 \, dS_2 \tag{2.21}$$

where $\sigma_i$ is the fluid stress tensor and $\mathbf{n}_i$ is the normal pointing out of the surface of particle $i$. The force of the fluid on the particle has the same magnitude but the opposite direction.

$$\text{Inclusion monotonicity principle: if } V_2 \subset V_1, \text{ then } \mathbf{f}_2 \cdot \mathbf{w} \le \mathbf{f}_1 \cdot \mathbf{w} \tag{2.22}$$

The drag is the component of the fluid force on the particle in the direction of **w** [17]. Dividing through by |**w**| shows that the inclusion monotonicity relation Eq. (2.22) gives that the magnitude of the drag force on particle 1 is greater than the magnitude of the drag force on particle 2. The proof of the inclusion monotonicity principle Eq. (2.22) is illustrated in Fig. 2.3 and given below.

Let $\mathbf{u}_1$ be the Stokes flow around the larger particle 1 and $\mathbf{u}_2$ be the Stokes flow around the smaller particle 2. The energy dissipation rate in the fluid is given by the drag power [20]

$$
\epsilon^{\mathbf{u}_1} = \mathbf{f}_1 \cdot \mathbf{w} \tag{2.23}
$$

$$
\epsilon^{\mathbf{u}_2} = \mathbf{f}_2 \cdot \mathbf{w} \tag{2.24}
$$

**Fig. 2.3** Proof of the inclusion monotonicity principle, illustrated. Three panels are drawn with particles in grey and fluid in white. In panels 1 and 2, particles 1 and 2 displace volumes such that $V_2 \subset V_1$. The particles are moving with the same velocity **w** (shown with white-tipped arrows), creating fluid velocity fields $\mathbf{u}_1$ and $\mathbf{u}_2$ (shown with black-tipped arrows). The last panel depicts a non-physical velocity field **v** which is equal to $\mathbf{u}_1$ outside of $V_1$ and **w** in $V_1 - V_2$

From the above equations it is easily seen that Eq. (2.22) is equivalent to $\epsilon^{\mathbf{u}_2} \le \epsilon^{\mathbf{u}_1}$. However, because these are Stokes flows for different geometries, the energy dissipation rates cannot be directly compared. Therefore, we will construct a (non-physical) vector field **v** in order to compare the energy dissipated by the motion of the two particles. Define **v** piecewise to be $\mathbf{u}_1$ outside of $V_1$ and $\mathbf{v} = \mathbf{w}$, the translational velocity, inside of $V_1 - V_2$. The vector field **v** is continuous because of the no-slip boundary condition. Now we will compare the energy dissipation rates of the different vector fields $\mathbf{u}_2$, $\mathbf{u}_1$ and **v**.

We start by comparing **u**<sup>1</sup> and **v**. Because **v** is rigid body motion on *V*<sup>1</sup> − *V*2, **v** does not dissipate any energy there. Outside that set, **v** = **u**1. Therefore

$$
\epsilon^{\mathbf{v}} = \epsilon^{\mathbf{u}_1} \tag{2.25}
$$

We now move on to the comparison between $\mathbf{u}_2$ and **v**. By definition, outside of $V_1$, $\mathbf{v} = \mathbf{u}_1$, which is a divergence-free vector field. On $V_1 - V_2$, **v** is constant, so it is automatically divergence-free there. Therefore **v** is a divergence-free vector field defined on the same volume of fluid as $\mathbf{u}_2$, and by the minimum energy dissipation theorem **v** cannot dissipate less energy than $\mathbf{u}_2$, i.e.

$$
\epsilon^{\mathbf{u}_2} \le \epsilon^{\mathbf{v}} \tag{2.26}
$$

Combining this with Eq. (2.25) and substituting the formulas for the energy dissipation rates, Eqs. (2.23) and (2.24), gives the inclusion monotonicity principle, Eq. (2.22).
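The inclusion monotonicity principle can be checked against the classical Stokes drag law for a translating sphere. With the sign convention for which $\mathbf{f} \cdot \mathbf{w}$ equals the positive dissipation rate of Eqs. (2.23) and (2.24), $\mathbf{f} = 6\pi\mu a \mathbf{w}$, equal and opposite to the hydrodynamic drag. The parameter values below are illustrative:

```python
import numpy as np

# Inclusion monotonicity checked against the Stokes drag law for spheres.
# f.w is the positive dissipation rate of Eqs. (2.23)-(2.24), so f is taken
# equal and opposite to the hydrodynamic drag -6*pi*mu*a*w.
mu = 1.0e-3                         # dynamic viscosity [Pa s]
w = np.array([0.0, 0.0, -1.0e-3])   # common translation velocity [m/s]

def force(a, w, mu=mu):
    """Force transmitted to the fluid by a sphere of radius a moving at w."""
    return 6.0 * np.pi * mu * a * w

a1, a2 = 2.0e-6, 0.5e-6             # sphere 2 fits entirely inside sphere 1
f1, f2 = force(a1, w), force(a2, w)

# Eq. (2.22): if V2 is contained in V1, then f2.w <= f1.w
print(np.dot(f2, w) <= np.dot(f1, w))   # True
print(np.dot(f2, w) > 0.0)              # True: dissipation rates are positive
```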

# **2.5 Limits of the Stokes Approximation**

# *2.5.1 Example of a System Where the Stokes Approximation Does Not Work*

The examination of the validity of the Stokes approximation is very revealing of the logical structure of the features of Stokes flow (negligibility of inertial forces, reversibility and the minimum energy dissipation theorem). The most dramatic setting to consider is the famous "Stokes paradox". This paradox arises in uniform Stokes flow past an infinite rigid cylinder. Suppose that such a cylinder is translating through a fluid with constant non-zero velocity $\mathbf{u}_0$ and has "no slip" on its surface. We suppose that very far from the cylinder the fluid is at rest: $\mathbf{u}(\mathbf{x}) \to 0$ as $|\mathbf{x}| \to \infty$. Unfortunately, there is no solution to Eqs. (2.6) and (2.7) consistent with these boundary conditions [16, 19]. In a more general context, the Stokes paradox occurs whenever a non-trivial two-dimensional solution of the Stokes equations (2.6) and (2.7) has no-slip boundary conditions on an object whose surface is a simple closed curve. The velocity is then necessarily logarithmically unbounded far from the object [18, 40]. More physically, the Stokes paradox occurs because the energy dissipated by the flow around the cylinder does not decay far from the cylinder—in other words, it is a consequence of the minimum dissipation principle.
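The logarithmic growth at the heart of the Stokes paradox can be seen directly in the two-dimensional Stokeslet, the flow due to a point force **F** in unbounded 2D Stokes flow. Its Green's function contains a $-\ln r$ term (the arbitrary length scale inside the logarithm is set to 1 here), so the speed grows with distance instead of decaying:

```python
import numpy as np

# 2D Stokeslet: Green's function G_ij = (-ln(r) delta_ij + x_i x_j / r^2)
# / (4*pi*mu); the -ln(r) term makes the flow grow far from the force,
# which is the root of the Stokes paradox.
mu = 1.0
F = np.array([1.0, 0.0])

def stokeslet_2d(x):
    r = np.linalg.norm(x)
    G = (-np.log(r) * np.eye(2) + np.outer(x, x) / r**2) / (4.0 * np.pi * mu)
    return G @ F

speeds = [np.linalg.norm(stokeslet_2d(np.array([r, 0.0])))
          for r in (10.0, 100.0, 1000.0)]
print(speeds[0] < speeds[1] < speeds[2])   # True: |u| grows with distance
```

In three dimensions, by contrast, the Stokeslet decays like $1/r$, which is why a sphere, unlike a cylinder, admits a Stokes solution.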

### **Other Linear Flow Equations**

Because the Stokes approximation is not always justified and the Navier–Stokes equations (2.1)–(2.2) are mathematically complicated, it is desirable to have other linear equation systems for fluid flow. We will very briefly present two such systems which remain tractable and in which the Stokes paradox demonstrably does not occur: the Oseen and Brinkman equations.

We start with the well-known Oseen equations [19]. Let there be some constant background flow $\mathbf{u}_\infty$ imposed on the fluid. As mentioned before, in the Navier–Stokes equations the inertial force has the nonlinear form $\rho(\mathbf{u} \cdot \nabla)\mathbf{u}$. If the characteristic velocity of the flow is much less than $|\mathbf{u}_\infty|$, then the main component of inertia is the resistance of the fluid flow against the background flow. We can decompose the local flow as $\mathbf{u} = \mathbf{u}_\infty + \mathbf{u}^O$ and call $\mathbf{u}^O$ the Oseen flow. The inertial force is therefore $\rho\left((\mathbf{u}_\infty + \mathbf{u}^O) \cdot \nabla\right)(\mathbf{u}^O + \mathbf{u}_\infty) = \rho(\mathbf{u}_\infty \cdot \nabla)\mathbf{u}^O + \rho(\mathbf{u}_\infty \cdot \nabla)\mathbf{u}_\infty + \rho(\mathbf{u}^O \cdot \nabla)\mathbf{u}_\infty + \rho(\mathbf{u}^O \cdot \nabla)\mathbf{u}^O$. Because $\mathbf{u}_\infty$ is constant, the middle terms are zero. Furthermore, since we are looking for a linear equation, we assume that $|\mathbf{u}^O| \ll |\mathbf{u}_\infty|$ and neglect the nonlinear term. The remaining term $\rho(\mathbf{u}_\infty \cdot \nabla)\mathbf{u}^O$ incorporates inertial forces into the linear equations. The equations resulting from the addition of this term to Eq. (2.6) are termed the Oseen equations [41, 42]. For a steady, incompressible fluid they have the form

$$
\rho(\mathbf{u}_{\infty} \cdot \nabla)\mathbf{u}^{O} = \mu \nabla^{2}\mathbf{u}^{O} - \nabla p^{O} \tag{2.27}
$$

$$
\nabla \cdot \mathbf{u}^{O} = 0
$$

where *p<sup>O</sup>* is the pressure associated with such a flow.
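For a sphere, the Oseen equations yield the well-known first inertial correction to the Stokes drag, $F = 6\pi\mu a U (1 + 3Re/8)$ with the radius-based Reynolds number $Re = \rho U a / \mu$. A quick evaluation with illustrative parameter values (a small particle in water-like fluid):

```python
import numpy as np

# Oseen's first-order inertial correction to the Stokes drag on a sphere,
# F = 6*pi*mu*a*U*(1 + 3*Re/8), with Re = rho*U*a/mu (radius-based).
# Parameter values are illustrative.
rho, mu = 1000.0, 1.0e-3   # density [kg/m^3], viscosity [Pa s]
a, U = 10.0e-6, 1.0e-3     # sphere radius [m], speed [m/s]

Re = rho * U * a / mu                        # ~0.01: deep in the Stokes regime
F_stokes = 6.0 * np.pi * mu * a * U
F_oseen = F_stokes * (1.0 + 3.0 * Re / 8.0)

print(round(F_oseen / F_stokes, 5))          # 1.00375: a 0.4% correction
```

Unlike the pure Stokes problem, the Oseen system also admits a bounded solution for the translating cylinder, resolving the Stokes paradox in two dimensions.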

There are considerations other than inertial forces that one can take into account for fluid motion in systems described by linear equations. For example, fluid flows in porous media can be described by linear equations. The solid skeleton causes an additional hydrodynamic resistance, which in the Brinkman model of porous media is introduced as a new term. This results in the following equations for fluid velocity **u***<sup>B</sup>* and fluid pressure *pB*:

$$
\mu \nabla^2 \mathbf{u}^B - \nabla p^B = c \mathbf{u}^B \tag{2.28}
$$

$$
\nabla \cdot \mathbf{u}^B = 0
$$

where *c* is the ratio of the fluid dynamic viscosity to the permeability of the porous medium.
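A simple analytic solution of Eq. (2.28) is pressure-driven flow in a porous channel $|y| \le h$ (our illustrative setup), where the profile is a hyperbolic cosine governed by the Brinkman screening length $\ell = \sqrt{\mu/c}$. The snippet below verifies numerically that this profile satisfies the one-dimensional Brinkman equation:

```python
import numpy as np

# Verify that the analytic profile for pressure-driven Brinkman flow in a
# porous channel |y| <= h (our illustrative setup) satisfies Eq. (2.28) in 1D:
#     mu * u'' - dp/dx = c * u,   u(+-h) = 0,   dp/dx = -G.
mu, c, h, G = 1.0e-3, 1.0e4, 1.0e-3, 1.0e3

ell = np.sqrt(mu / c)            # Brinkman screening length
y = np.linspace(-h, h, 2001)
u = (G / c) * (1.0 - np.cosh(y / ell) / np.cosh(h / ell))

# residual of the Brinkman equation at the interior grid points
dy = y[1] - y[0]
u_yy = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy**2
residual = mu * u_yy + G - c * u[1:-1]

print(np.max(np.abs(residual)) < 1e-3 * G)   # True: residual vanishes
```

Away from the walls the velocity tends to the Darcy plug value $G/c$, with deviations confined to boundary layers of thickness $\ell$.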

# *2.5.2 Departures from Reversibility Caused by Inertia*

The Stokes approximation, which involves the deliberate neglect of inertia, cannot be applied to systems in which inertial forces materially contribute to the motion. This can be seen in flow visualisation. In symmetric environments, reversibility implies that the flow will also be symmetric [33]. For non-Stokes flows (i.e. $Re > 0$), the symmetry in the flow lines breaks down [43]. This departure from reversibility grows with the Reynolds number [44].

Reversible flow was shown in Sect. 2.3.1 to have the interesting property that a spherical particle under an external force parallel to a wall does not experience any lateral motion. In an inertial flow, however, a spherical particle tends to drift away from walls, breaking the reversibility and causing the "tubular pinch effect" [44], with a different pattern of fluid streamlines.

In the analysis given in Sect. 2.3.1, a sphere rotating in a non-inertial fluid was considered. This leads to a reversible, time-symmetric fluid flow [33]. However, a sphere (or a cylinder) which experiences inertial effects while rotating will create an irreversible flow. The inertial forces cause the rotating cylinder to create vortices irreversibly, which then interact with the rotation of the cylinder in a complex, non-time-symmetric way, as shown in Ref. [45].

# *2.5.3 Accelerating Fluid Example*

Even when the Stokes approximation is mathematically coherent, one should take care in interpreting its results. As an illustrative example, consider a fluid contained within an infinite, impenetrable cylinder of radius *R* rotating with angular velocity $\Omega$, with no-slip boundary conditions at its surface. The explicit solution of the Stokes equations has the form

$$\mathbf{u}^S = \Omega r \hat{\theta} \tag{2.29}$$

$$p^S = c \tag{2.30}$$

where *c* is a constant and $\hat{\theta}$ is the unit vector in the azimuthal direction of the corresponding cylindrical coordinates. The flow velocity is, effectively, rigid-body rotation. The pressure is constant in space, and therefore there are clearly no centrifugal forces in the radial direction.

Moving on to consider the Navier–Stokes equations, we find that the solution becomes

$$\mathbf{u}^{NS} = \Omega r \hat{\theta} \tag{2.31}$$

$$p^{NS} = \frac{1}{2}\rho \,\Omega^2 r^2 + c \tag{2.32}$$

As in the Stokes case, the fluid undergoes rigid-body rotation, but now a centrifugal force appears in the form of a pressure gradient in the radial direction. The Stokes solution $(\mathbf{u}^S, p^S)$ has no forces in the radial direction, but in practice we would expect a centrifugal force in the presence of rotation. In the steady Navier–Stokes case, the centrifugal force per unit fluid volume is balanced by the pressure gradient. Therefore, the pressure $p^{NS}$ with its centrifugal term is much more realistic than the constant pressure $p^S$.
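The radial momentum balance behind Eq. (2.32) can be verified numerically: the pressure gradient $dp/dr$ must supply the centripetal force density $\rho u_\theta^2 / r$ of the rigidly rotating fluid (parameter values below are illustrative):

```python
import numpy as np

# Check that p = 0.5*rho*Omega^2*r^2 (Eq. (2.32), constant dropped) supplies
# the centripetal force density rho*u_theta^2/r of the rigid-body rotation
# u_theta = Omega*r. Parameter values are illustrative.
rho, Omega, R = 1.0e3, 10.0, 0.1
r = np.linspace(1.0e-4, R, 1000)

p = 0.5 * rho * Omega**2 * r**2
u_theta = Omega * r

# radial momentum balance: dp/dr = rho * u_theta^2 / r
dpdr = np.gradient(p, r, edge_order=2)   # exact for a quadratic profile
print(np.allclose(dpdr, rho * u_theta**2 / r))   # True
```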

# **2.6 Conclusions**

Summarising, various properties essential to the understanding of Stokes flow have been discussed, including the negligibility of inertial forces, reversibility and the minimum energy dissipation theorem. Illustrative examples related to these properties have been provided: irreversible trajectories in Stokes flow, inertial terms for the fluid flow generated by a rotating cylinder, the force on a rotating sphere close to a solid plane wall, the Stokes paradox, and energy dissipation for particles of different shapes. The meaning and the limits of the Stokes approximation have been discussed in the context of more general equations.

We will conclude with some analysis of the logical relationship between the assumption of the negligibility of inertial forces, the assumption of reversibility and the minimum energy dissipation theorem.

The assumption that a flow minimises the energy dissipation rate entails that the flow satisfies the Stokes equations. This means that minimum energy dissipation implies both reversibility and the negligibility of inertial forces. Stated contrapositively, irreversible flows or flows with inertial forces dissipate more energy than Stokes flows.

Furthermore, reversibility implies the negligibility of inertial forces. This is equivalent to saying that the presence of inertial forces implies irreversibility. Any term proportional to *ρ(***u**·∇*)***u**, the inertial force term in the Navier–Stokes equation, will make a flow irreversible.

However, neither the negligibility of inertial forces nor reversibility implies the minimum energy dissipation theorem. Like the Stokes equations (2.6) and (2.7), the Brinkman equations (2.28) are reversible and contain no inertial terms. But one can simply apply the proof in Sect. 2.4.1, substituting $\mathbf{u}^B$ for the general solenoidal vector field **v**, to find that the Brinkman flow dissipates more energy than the Stokes one. This shows that, counterintuitively, reversibility is not sufficient to achieve the minimum energy dissipation achieved by Stokes flows.

**Acknowledgements** This work was supported in part by Narodowe Centrum Nauki under grant No. 2014/15/B/ST8/04359. We acknowledge scientific benefits from COST Action MP1305.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 3 Mesoscopic Approach to Nematic Fluids**

**Žiga Kos, Jure Aplinc, Urban Mur, and Miha Ravnik**

# **3.1 Introduction to Nematic Fluids**

Nematic liquid crystalline fluids are complex anisotropic fluids characterised by internal orientational order of their constituent building blocks [1, 2], which range in scale from molecules and macromolecules like DNA to colloidal rods or platelets. Typically, the orientational order emerges in some temperature or concentration range of the building blocks as a result of the geometrical shapes of the prolate or oblate building blocks. More recently, nematic order has emerged also as an important characteristic of various active fluids, i.e. fluids that can self-propel. Nematic fluids are inherently soft materials, with the orientational order responding as an effective elastic medium to external perturbations, like surfaces or electromagnetic fields. It is this soft and—optically or structure-wise—strong response to external fields which makes nematic fluids potent materials in various applications, including in the fields of optics, photonics, and sensors. The broadest range of applications and experiments with nematic fluids is at the scales of multiple building elements (which, for example, for molecular nematics, is the micrometre regime), where mesoscopic approaches prove the strongest for describing the systems, compared to the molecular and effective molecular approaches [3, 4] used at smaller scales. In view of this, this chapter will present the mesoscopic approach to nematic fluids, based on a continuum description of nematic mechanisms and phenomena.

M. Ravnik (✉) Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia

J. Stefan Institute, Ljubljana, Slovenia e-mail: miha.ravnik@fmf.uni-lj.si

Ž. Kos · J. Aplinc · U. Mur

Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia

© The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9\_3

Nematic orientational order is characterised at the mesoscopic scale primarily by the average orientation of the building blocks (also called nematogens), called the director **n**, with apolar $\mathbf{n} \to -\mathbf{n}$ symmetry. This symmetry can be seen from the basic model system of nematic fluids, i.e. a fluid of rods, where opposite orientations of a rod are equivalent. The fluctuations, and a possible asymmetry in the fluctuations, of the individual building blocks open additional degrees of freedom, given by the nematic degree of order *S* and the biaxiality, which are embedded in the nematic tensor order parameter Q, defined and explained further in this chapter. Overall, the full anisotropic configuration of nematic ordering can vary in three spatial dimensions and can also be time dependent [1]. Equilibrium nematic configurations correspond to a minimum of the free energy. Uniform nematic ordering can be broken by electromagnetic fields or by interaction with boundaries, such as cell or particle surfaces. To satisfy these constraints, regions of frustrated orientational order—topological defects—often emerge. In a defect, the singularity in the director field is accompanied by a reduction of *S*, thus effectively melting the nematic into the isotropic state. The shape of topological defects ranges from point defects to line defects and defect walls [2]. Although the topology of a local nematic field is given by the constraints at the boundary, the shape and the metastability of the nematic structures are dictated by the free energy.

Out of equilibrium, the nematic alignment is strongly coupled to the velocity field. There are three main effects of the nematic ordering on the velocity field: (1) the rotation of nematic molecules induces material flow, which is known as backflow, (2) even at a fixed nematic configuration (i.e. fixed Q) the fluid viscosity is anisotropic, and (3) in active nematics active force dipoles drive the fluid flow. The nematic ordering is in turn affected by the fluid flow through advection and through the tumbling/aligning dynamics. The coupling between fluid flow and orientational order shows remarkable complexity and provides another aspect of low Reynolds number fluid mechanics. Out-of-equilibrium dynamics can be understood as a competition between the effective nematic elasticity, which drives the system towards equilibrium, and the velocity field and other time-modulating fields that promote further deformations of the orientational order. The solutions of such a competition range from low Reynolds number turbulence to complex mutually interacting structures in both the velocity and Q-tensor fields. Several approaches to the coupled behaviour of fluid flow and nematic ordering have been developed; among the most commonly used are the Beris–Edwards [5] and Qian–Sheng [6] models of nematodynamics. Velocity effects on the Q-tensor are introduced through generalised advection terms that compete with the molecular forces promoting relaxation to the minimum of the free energy. The effect of the orientational order on the fluid flow enters through the stress tensor that is included in the Navier–Stokes equation. The above-mentioned models are based on the full Q-tensor, but models based only on director dynamics, such as the Ericksen–Leslie–Parodi model [1], are also used extensively.

Orientational order and material flow of nematic fluids can be shaped into complex microfluidic structures, as a result of a combined soft response to external electromagnetic fields, confinement, and pressure boundary conditions, coupled with the internal effective elasticity and possibly even intrinsic activity [1, 2]. Fascinating field structures in nematic fluids are revealed by theory and experiments, for example, in the context of assisted assembly of colloidal crystals [7], the study of complex topological states [8, 9], and sensing applications [10]. Systems studied out of equilibrium include quench transitions [11], backflow effects [12, 13], and nematic flow in various Poiseuille geometries [14, 15]. The dependence of the nematic viscosity on the director orientation can be used in microfluidic circuits to control the direction of flow and the transport of material, with selected recent works including defect-line-assisted transport of colloidal cargo [16] and nematic fluid resistance tunable by an electric field [17]. Optical sensors are developed in Ref. [18]. A lot of attention has also been given to the subject of active nematics, both from the experimental and the theoretical point of view [19]. The rich variety of structures in mutually coupled fields, each with its own intrinsic symmetry, calls for a deeper understanding of their interactions and potential applications.

Defects in liquid crystals can form regular or irregular structures, depending on the type of confinement, inserted colloidal particles, external fields, and flow [20–23]. Confinement and surface anchoring can impose and affect defects in the liquid crystal. If the confinement has a regular structure, the transition from regular to irregular structures is controllable [20], and the created system can even have a memory effect. Colloidal particles inserted in the liquid crystal introduce topological defects on their surfaces. These defects attract each other, so if the colloids have an appropriate design, such as a suitable geometry and surface-imposed anchoring, this can cause self-assembly of colloidal crystals [7]. Even the passive or active flow itself can cause reorientation of the director via the backflow mechanism, and under certain circumstances this gives rise to structures in the director field [24]. A recently developed platform for studying topological defects in passive materials uses confined nematics and nematic colloids [11, 20, 25, 26], where defects of high complexity can be realised, ranging from topological defect knots [27] and handlebody topological colloids [28] to chiral nematic solitons such as torons [22] and hopfions [29], quasicrystalline colloidal tilings [30], and droplets with holes [31]. The joint feature of these passive complex defect structures is that defects in 3D generally become delocalised and emerge in the form of topological loops or even networks, called nematic braids. In parallel to their observation, there has also been major development of experimental and theoretical, especially topological, methods for the characterisation and control of these advanced defects [26, 32–34].

Complex materials based on nematic fluids have recently been attracting a lot of attention, in part because of novel ways to control birefringence and the possibility to form complex topological structures. Notably, these specific systems include chiral nematics [22, 27, 35–38] and lyotropic liquid crystals [39–42], which will not be explained in this chapter due to space limitations. Additionally, active matter is a major growing field of science, in which nematicity is emerging as an important characteristic of multiple systems. Finally, the goal of this chapter is to give a basic introduction to the mesoscopic approach to nematic fluids and to show some selected exciting fields of development, such as nematic colloids, topology, and microfluidics.

# **3.2 Nematic Order Parameters**

Nematic liquid crystal fluids typically consist of rod-like or disk-like molecules (building blocks) with no long-range positional order. Due to their anisotropic shape, the molecules exhibit orientational order and tend to align along some common direction, usually referred to as the director **n**, with both directions **n** and −**n** being equivalent. The director is a vector-like order parameter, which corresponds to the time or ensemble average of the molecular orientations **u** (see Fig. 3.1).

The director **n** bears no information about the degree of orientational order, i.e. the degree of fluctuations of the molecular orientations **u**; therefore, the nematic degree of order (scalar order parameter) *S* is introduced. Nematic molecules in thermodynamic equilibrium assume directions according to some probability distribution *ρ*(**u**). We want to characterise the alignment by one parameter rather than the full distribution function *ρ*(**u**), which can be quite general. Without loss of generality one can choose the *z* axis along **n** and characterise the spatial directions of **u** by the azimuthal angle *φ* and the polar angle *θ*. The first idea would be to use the average ⟨**u** · **n**⟩ = ⟨cos *θ*⟩, but this vanishes because the molecules have no distinction between head and tail. The first non-trivial moment is the quadrupole, which is used to define the nematic degree of order *S* as:

$$S = \left< P\_2(\cos \theta) \right> = 2\pi \int\_0^\pi P\_2(\cos \theta) \rho(\theta) \sin(\theta) d\theta,\tag{3.1}$$

where *P*<sub>2</sub>(*x*) = (3*x*<sup>2</sup> − 1)/2 is the second-order Legendre polynomial and ⟨·⟩ denotes the average over all molecular orientations. The values of *S* lie in the interval [−1/2, 1]. When all the molecules are perfectly aligned with the director, the nematic degree of order is *S* = 1, whereas *S* = 0 corresponds to the isotropic phase, in which the molecules are oriented randomly, and *S* = −1/2 represents the state where all the molecules are aligned perpendicular to the director.
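The definition of *S* as ⟨*P*<sub>2</sub>(cos *θ*)⟩ can be checked numerically by averaging over sampled molecular orientations. The minimal Python sketch below (an illustration, with assumed sample sizes) recovers the limiting values *S* = 1, *S* = −1/2, and *S* ≈ 0 for aligned, perpendicular, and isotropic ensembles:

```python
import numpy as np

rng = np.random.default_rng(0)

def order_parameter(u):
    """S = <P2(cos theta)>, with theta measured from the director,
    here taken along the z axis."""
    cos_theta = u[:, 2]
    return float(np.mean(0.5 * (3.0 * cos_theta**2 - 1.0)))

# perfectly aligned rods -> S = 1
aligned = np.tile([0.0, 0.0, 1.0], (1000, 1))

# rods lying perpendicular to the director -> S = -1/2
perp = np.tile([1.0, 0.0, 0.0], (1000, 1))

# isotropic sample -> S ~ 0 (up to statistical noise)
v = rng.normal(size=(200_000, 3))
iso = v / np.linalg.norm(v, axis=1, keepdims=True)

print(order_parameter(aligned))   # 1.0
print(order_parameter(perp))      # -0.5
print(abs(order_parameter(iso)) < 0.01)   # True
```

The isotropic estimate fluctuates as 1/√*N*; with 2 × 10<sup>5</sup> samples the residual is well below 0.01.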

**Fig. 3.1** Nematic ordering of rod-like molecules along the director **n**. *θ* is the tilt angle of individual molecules with respect to director

The full orientational order in liquid crystals is described by the tensor order parameter *Q<sub>ij</sub>* that contains both the degree of order *S* and the director **n**, as well as a possible biaxiality *P*. *Q<sub>ij</sub>* reads

$$Q\_{ij} = \frac{S}{2} \left( 3n\_i n\_j - \delta\_{ij} \right) + \frac{P}{2} \left( e\_i^{(1)} e\_j^{(1)} - e\_i^{(2)} e\_j^{(2)} \right),\tag{3.2}$$

where **e**<sup>(1)</sup> is the secondary director (perpendicular to **n**) that characterises the biaxial ordering, and **e**<sup>(2)</sup> = **n** × **e**<sup>(1)</sup>. Values of *P* lie in the interval [−3/2, 3/2], where *P* = 0 characterises uniaxial ordering and |*P*| = 3/2 corresponds to perfect ordering along the secondary director **e**<sup>(1)</sup>. The order parameter tensor *Q<sub>ij</sub>* is a real, symmetric, and traceless tensor. It has five degrees of freedom: the nematic degree of order *S*, the biaxiality *P*, the orientation of the director **n** (two angles), and the orientation of the secondary director **e**<sup>(1)</sup> relative to the director (one angle). The order parameter tensor has three eigenvalues, namely *S*, −(*S* − *P*)/2, and −(*S* + *P*)/2, with the corresponding eigenvectors **n**, **e**<sup>(1)</sup>, and **e**<sup>(2)</sup>, as follows directly from Eq. (3.2).
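These properties of *Q<sub>ij</sub>* are easy to verify numerically. The short sketch below (illustrative values of *S* and *P*) builds the tensor of Eq. (3.2) and checks that it is symmetric and traceless with the stated eigenvalues:

```python
import numpy as np

def q_tensor(S, P, n, e1):
    """Tensor order parameter of Eq. (3.2)."""
    n, e1 = np.asarray(n, float), np.asarray(e1, float)
    e2 = np.cross(n, e1)                      # secondary frame vector
    return (S / 2) * (3 * np.outer(n, n) - np.eye(3)) \
         + (P / 2) * (np.outer(e1, e1) - np.outer(e2, e2))

S, P = 0.53, 0.1                              # illustrative values
Q = q_tensor(S, P, n=[0, 0, 1], e1=[1, 0, 0])

assert np.allclose(Q, Q.T)                    # symmetric
assert abs(np.trace(Q)) < 1e-12               # traceless

evals = np.sort(np.linalg.eigvalsh(Q))
print(np.round(evals, 3))   # [-(S+P)/2, -(S-P)/2, S] = [-0.315 -0.215  0.53 ]
```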

# **3.3 Landau–de Gennes Free Energy Approach**

A powerful approach at the mesoscopic level to characterise the equilibrium properties of nematic fluids is to use the Landau–de Gennes free energy volume density *f*, which can be written as:

$$f = f\_{\rm NI} + f\_{\rm E}.\tag{3.3}$$

The model consists of two main bulk free energy density contributions: the first contribution, *f*<sub>NI</sub>, describes the phase transition between the nematic and isotropic mesophases, and the second contribution, *f*<sub>E</sub>, accounts for spatial elastic deformations of the tensor order parameter *Q<sub>ij</sub>*.

# *3.3.1 Landau Theory of Nematic Phase Transition*

The stability of nematic mesophases depends in most nematic fluids either on the temperature or on the concentration of the building blocks. In thermotropic nematic liquid crystals, temperature drives the phase transition between the nematic phase with orientational order and the isotropic phase with no long-range orientational order, with the order parameter *Q<sub>ij</sub>* consequently dropping abruptly to zero. The phenomenological Landau theory is a mean-field theory and a well-established approach to model phase transitions. The Gibbs free energy is expanded in the vicinity of the transition with respect to the scalar invariants of the order parameter tensor *Q<sub>ij</sub>* up to fourth order [1]. The expansion reads

$$f\_{\rm NI} = \frac{1}{2} A(T) Q\_{ij} Q\_{ji} + \frac{1}{3} B Q\_{ij} Q\_{jk} Q\_{ki} + \frac{1}{4} C (Q\_{ij} Q\_{ji})^2,\tag{3.4}$$

where *A*, *B*, and *C* are material parameters and summation over repeated indices is assumed. Parameter *A* = *a*(*T* − *T*<sup>∗</sup>) contains the temperature dependence which governs the nematic–isotropic transition [2]. In the nematic phase, below the temperature *T*<sup>∗</sup>, both *A* and *B* are negative, but *C* must be positive to ensure that the free energy density is bounded from below. Typical values of the material parameters for molecular nematic liquid crystals are ≈10<sup>6</sup> J/m<sup>3</sup>. Assuming a homogeneous **n** and no biaxiality, the free energy density can be rewritten in terms of *S* alone as:

$$f\_{\rm NI}^{SI} = \frac{3}{4}a(T - T^\*)S^2 + \frac{1}{4}BS^3 + \frac{9}{16}CS^4,\tag{3.5}$$

where the free energy now exhibits the dependence shown in Fig. 3.2. The first term drives the transition, the second breaks the symmetry of *S*, and the third bounds *f*<sub>NI</sub> from below. Minimisation of the free energy gives the equilibrium nematic degree of order *S*<sub>eq</sub>

$$S\_{\rm eq} = \frac{1}{2} \left( -\frac{B}{3C} + \sqrt{\left(\frac{B}{3C}\right)^2 - \frac{8A(T)}{3C}} \right) \tag{3.6}$$

that holds for *T* < *T*<sub>c</sub> and a homogeneous nematic under no external field.
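Equation (3.6) is straightforward to evaluate. The sketch below uses assumed, illustrative 5CB-like Landau coefficients (*a* ≈ 0.09 × 10<sup>6</sup> J/(m<sup>3</sup> K), *B* ≈ −2.1 × 10<sup>6</sup> J/m<sup>3</sup>, *C* ≈ 1.7 × 10<sup>6</sup> J/m<sup>3</sup>; order-of-magnitude values, not taken from this chapter) to estimate *S*<sub>eq</sub> a couple of kelvin below *T*<sup>∗</sup>:

```python
import numpy as np

# Assumed, illustrative 5CB-like Landau coefficients
a = 0.087e6        # J/(m^3 K)
B = -2.12e6        # J/m^3
C = 1.73e6         # J/m^3
T_star = 307.0     # supercooling temperature T* in K (assumed)

def s_eq(T):
    """Equilibrium nematic degree of order, Eq. (3.6); valid below Tc."""
    A = a * (T - T_star)
    b = B / (3.0 * C)
    return 0.5 * (-b + np.sqrt(b**2 - 8.0 * A / (3.0 * C)))

print(round(s_eq(305.0), 2))       # -> 0.53, close to S = 0.525 quoted for 5CB
assert s_eq(300.0) > s_eq(305.0)   # the degree of order grows on cooling
```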

**Fig. 3.2** Free energy density *f*<sub>NI</sub><sup>SI</sup> as a function of the nematic degree of order *S* for some typical temperatures. The minimum of the free energy density changes with temperature. For *T* > *T*<sub>c</sub> the free energy has a stable minimum at *S* = 0 (isotropic phase), but for *T* < *T*<sub>c</sub> the minimum corresponds to *S* ≠ 0 (nematic phase). At the temperature *T*<sub>c</sub>, both phases coexist and the transition is of first order. Below the super-cooling temperature *T*<sup>∗</sup> the isotropic phase is unstable. *T*<sup>∗∗</sup> represents the super-heating temperature of the nematic phase

# *3.3.2 Elastic Free Energy*

Organisation of the nematic into a uniform and homogeneous pattern is energetically preferred. However, such organisation is usually incompatible with the boundary conditions and external fields. When subjected to spatial variations of the orientational ordering, the nematic material effectively acts as an elastic medium, where the elastic deformation can be decomposed into three basic deformation modes: splay, twist, and bend, presented in Fig. 3.3. The elastic free energy is an expansion in small gradients of the order parameter tensor **Q** that penalises nematic distortions away from the uniform configuration. The terms are of second order in the derivatives because first-order terms are forbidden by the symmetry of an achiral nematic [2]. The free energy density reads

$$f\_{\rm E} = \frac{1}{2} L\_1 \frac{\partial Q\_{ij}}{\partial x\_k} \frac{\partial Q\_{ij}}{\partial x\_k} + \frac{1}{2} L\_2 \frac{\partial Q\_{ij}}{\partial x\_j} \frac{\partial Q\_{ik}}{\partial x\_k} + \frac{1}{2} L\_3 Q\_{ij} \frac{\partial Q\_{kl}}{\partial x\_i} \frac{\partial Q\_{kl}}{\partial x\_j},\tag{3.7}$$

where *L*<sub>1</sub>, *L*<sub>2</sub>, and *L*<sub>3</sub> are tensorial elastic constants, *x<sub>i</sub>* are Cartesian coordinates, and summation over repeated indices is assumed. Three elastic constants are introduced to quantify all three basic elastic modes. More third-order terms in **Q** are possible in Eq. (3.7), but the choice of three terms is sufficient for the mapping onto the three Frank elastic constants. If one assumes the uniaxial approximation of the order parameter tensor **Q** (*S* = const. and *P* = 0), the free energy can be rewritten as the Frank–Oseen free energy density, which is expressed in terms of the director **n** and its derivatives [43, 44]:

$$f\_{\rm E}^{\rm FO} = \frac{1}{2}K\_1(\nabla \cdot \mathbf{n})^2 + \frac{1}{2}K\_2(\mathbf{n} \cdot (\nabla \times \mathbf{n}))^2 + \frac{1}{2}K\_3(\mathbf{n} \times (\nabla \times \mathbf{n}))^2,\tag{3.8}$$

where the terms directly account for splay, twist, and bend deformation modes of the nematic. By comparing both expansions [Eqs. (3.7) and (3.8)] one can get the mapping between tensorial constants *Li* and Frank elastic constants *Ki*, which are usually measured in experiments [45]:

$$K\_1 = \frac{9S^2}{4}(2L\_1 + L\_2 - L\_3 S),\tag{3.9}$$

**Fig. 3.3** Basic nematic elastic deformation modes: (**a**) splay, (**b**) twist, and (**c**) bend

$$K\_2 = \frac{9S^2}{4}(2L\_1 - L\_3S),\tag{3.10}$$

$$K\_3 = \frac{9S^2}{4}(2L\_1 + L\_2 + 2L\_3S). \tag{3.11}$$

Usually, the single elastic constant approximation is used, which sets *L*<sub>1</sub> = *L*, *L*<sub>2</sub> = *L*<sub>3</sub> = 0, and *K*<sub>1</sub> = *K*<sub>2</sub> = *K*<sub>3</sub> = *K*. The elastic free energy densities then reduce to

$$f\_{\rm E} = \frac{1}{2} L \frac{\partial Q\_{ij}}{\partial x\_k} \frac{\partial Q\_{ij}}{\partial x\_k},\tag{3.12}$$

$$f\_{\rm E}^{\rm FO} = \frac{1}{2}K\left[\left(\nabla \cdot \mathbf{n}\right)^2 + \left(\nabla \times \mathbf{n}\right)^2\right].\tag{3.13}$$

The adequacy of the one-constant approximation strongly depends on the choice of the material. For a nematic liquid crystal such as 5CB, the values of the elastic constants lie in the range 10<sup>−12</sup>–10<sup>−11</sup> N and differ by around 40% [46]. The elastic constants *K<sub>i</sub>* can also be strongly temperature dependent, but usually not at the same rate; therefore, the ratios *K*<sub>3</sub>/*K*<sub>1</sub> and *K*<sub>2</sub>/*K*<sub>1</sub> typically also vary [47].
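The mapping of Eqs. (3.9)–(3.11) can be wrapped in a few lines of Python (an illustrative sketch with assumed parameter values); in the one-constant limit *L*<sub>2</sub> = *L*<sub>3</sub> = 0 all three Frank constants coincide, as stated above:

```python
def frank_constants(L1, L2, L3, S):
    """Frank constants K1, K2, K3 from the tensorial constants,
    Eqs. (3.9)-(3.11)."""
    pref = 9.0 * S**2 / 4.0
    K1 = pref * (2.0 * L1 + L2 - L3 * S)
    K2 = pref * (2.0 * L1 - L3 * S)
    K3 = pref * (2.0 * L1 + L2 + 2.0 * L3 * S)
    return K1, K2, K3

# one-constant limit: L2 = L3 = 0 gives K1 = K2 = K3 = (9 S^2 / 2) L1
K1, K2, K3 = frank_constants(L1=1e-11, L2=0.0, L3=0.0, S=0.53)
assert K1 == K2 == K3
print(f"K = {K1:.2e} N")   # ~1e-11 N, the typical magnitude quoted above

# with nonzero L2, L3 the constants split (here K3 > K1 > K2)
K1, K2, K3 = frank_constants(L1=1e-11, L2=0.4e-11, L3=0.3e-11, S=0.53)
assert K3 > K1 > K2
```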

The effective ratio between the ordering free energy *f*<sub>NI</sub> and the elastic contribution *f*<sub>E</sub> to the Landau–de Gennes free energy determines a characteristic length scale of nematics, the nematic correlation length *ξ*<sub>N</sub>. Within the single elastic constant approximation and for a uniaxial order parameter tensor, *ξ*<sub>N</sub> equals [48]

$$\xi\_{\rm N} = \sqrt{\frac{L}{A + B S\_{\rm eq} + \frac{9}{2} C S\_{\rm eq}^2}}.\tag{3.14}$$

The nematic correlation length determines the spatial length scale for the variation of the nematic degree of order and therefore roughly sets the size of defects. For example, in molecular nematics, *ξ*<sub>N</sub> is of the order of a few nm.
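With assumed, illustrative 5CB-like parameter values (not quoted from this chapter), Eq. (3.14) indeed gives a correlation length of a few nanometres:

```python
import numpy as np

# Assumed, illustrative 5CB-like material parameters
L = 4e-11       # N, single elastic constant
A = -0.172e6    # J/m^3, i.e. a(T - T*) below the transition
B = -2.12e6     # J/m^3
C = 1.73e6      # J/m^3
S_eq = 0.53

# nematic correlation length, Eq. (3.14)
xi_N = np.sqrt(L / (A + B * S_eq + 4.5 * C * S_eq**2))
print(f"xi_N = {xi_N * 1e9:.1f} nm")   # a few nm, as stated above
```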

# *3.3.3 Surface Anchoring*

The Landau–de Gennes free energy density (Eq. (3.3)) determines the distortion energies in the bulk of the nematic fluid and, in the presence of surfaces, needs to be extended with surface free energy terms. Surfaces surrounding the nematic can affect the nematic ordering by imposing both a preferred orientation and a degree of order. Surfaces can in principle impose an arbitrary direction in space, with planar (tangential) anchoring and homeotropic (normal) anchoring being the most common [49].

Uniform surface anchoring (homeotropic or along another fixed direction) can be well described by the Rapini–Papoular-like surface free energy density [50]:

#### 3 Mesoscopic Approach to Nematic Fluids 59

$$f\_{\rm H} = \frac{1}{2} W\_{\rm H} \left( Q\_{ij} - Q\_{ij}^{0} \right)^2,\tag{3.15}$$

which quadratically penalises all deviations from the surface-preferred order parameter tensor *Q*<sup>0</sup><sub>*ij*</sub> with strength *W*<sub>H</sub>. Besides the preferred direction at the surface, the tensor *Q*<sup>0</sup><sub>*ij*</sub> also imposes the surface degree of order and biaxiality. In the case of homeotropic anchoring, the preferred tensor is constructed using the surface normal ***ν***, so that *Q*<sup>0</sup><sub>*ij*</sub> = (*S*<sub>eq</sub>/2)(3*ν<sub>i</sub>ν<sub>j</sub>* − *δ<sub>ij</sub>*). Typical values of the anchoring strength *W*<sub>H</sub> range from 10<sup>−3</sup> J/m<sup>2</sup> (strong anchoring) to 10<sup>−7</sup> J/m<sup>2</sup> (weak anchoring) [51].

Some surfaces favour planar degenerate anchoring, where the molecules tend to align along any direction within a plane, so that all azimuthal angles are equally probable. Such anchoring can be described by introducing the surface free energy potential [52]:

$$f\_{\rm PD} = W\_{\rm PD} \left( \tilde{Q}\_{ij} - \tilde{Q}\_{ij}^{\perp} \right)^{2},\tag{3.16}$$

where *W*<sub>PD</sub> is a constant measuring the surface anchoring strength. The model quadratically penalises any deviation of *Q̃<sub>ij</sub>* = *Q<sub>ij</sub>* + (*S*<sub>eq</sub>/2)*δ<sub>ij</sub>* from its projection onto the surface, *Q̃*<sup>⊥</sup><sub>*ij*</sub> = *P<sub>ik</sub>Q̃<sub>kl</sub>P<sub>lj</sub>*. The projection matrix is defined using the surface normal *ν<sub>i</sub>* as *P<sub>ij</sub>* = *δ<sub>ij</sub>* − *ν<sub>i</sub>ν<sub>j</sub>*. The anchoring strength is frequently characterised by the Kleman–de Gennes extrapolation length *ξ*<sub>S</sub>, which is defined as follows [1]:

$$
\xi\_{\mathbb{S}} = K / W. \tag{3.17}
$$

It effectively measures the relative strength of nematic elasticity and surface anchoring. Typically, the extrapolation length is of the order of 10 nm for surfaces with strong anchoring and increases to *ξ*<sub>S</sub> ∼ 10 μm for surfaces with weak anchoring.

# *3.3.4 Electric Field Effects*

Due to their polarisability, nematics are highly responsive to external electric fields. In the Landau–de Gennes framework, the dielectric coupling between the nematic and an external electric field can be introduced as an additional free energy contribution *f*<sub>D</sub>:

$$f\_{\rm D} = -\frac{1}{2}\epsilon\_0 \left(\bar{\epsilon}\delta\_{ij} + \frac{2}{3}\epsilon\_{\rm a}^{\rm mol} Q\_{ij}\right)E\_i E\_j,\tag{3.18}$$

where *E<sub>i</sub>* is the external electric field, *ε*<sub>0</sub> the vacuum permittivity, *ε̄* = (2*ε*<sub>⊥</sub> + *ε*<sub>∥</sub>)/3 the average liquid crystal permittivity, and *ε*<sup>mol</sup><sub>a</sub> = *ε*<sup>mol</sup><sub>∥</sub> − *ε*<sup>mol</sup><sub>⊥</sub> the molecular dielectric anisotropy, which is connected to the macroscopic dielectric anisotropy by *ε*<sub>a</sub> = *Sε*<sup>mol</sup><sub>a</sub>. *ε*<sup>mol</sup><sub>⊥</sub> and *ε*<sup>mol</sup><sub>∥</sub> are the eigenvalues of the dielectric permittivity tensor corresponding to the eigenvectors perpendicular and parallel to the director. Typical values for 5CB at room temperature are *ε*<sub>a</sub> = 11 and *S* = 0.525, giving *ε*<sup>mol</sup><sub>a</sub> = 21 [53].

The strength of the electric field can be characterised by introducing another length scale, the electric coherence length *ξ*<sub>E</sub>. Comparing the free energy due to the electric field [Eq. (3.18)] with the elastic free energy (Eq. (3.7)) gives [1]

$$\xi\_{\rm E} = \frac{1}{E} \sqrt{\frac{L\_1}{\epsilon\_a \epsilon\_0}},\tag{3.19}$$

where *E* is a typical electric field in the sample. The effects of the electric field are perceptible when *ξ*<sub>E</sub> is small compared to the system size. For a typical nematic (*L*<sub>1</sub> ≈ 10<sup>−11</sup> N, *ε*<sub>a</sub> ≈ 10) in an electric field *E* = 1 V/μm, the electric coherence length is *ξ*<sub>E</sub> ≈ 0.3 μm.

The competition between elasticity and electric field can be most conveniently studied in the Fréedericksz cell [54]. It consists of a nematic liquid crystal oriented between two solid plates with strong anchoring. The preferential direction imposed by the surfaces may be parallel or perpendicular to the plates, whereas the electric field is always applied perpendicular to the orientational axis imposed by the surface anchoring. There are three different cell setups, each corresponding to exactly one of the three basic elastic deformation modes (Fig. 3.3): splay, twist, and bend [55]. If the electric field exceeds the threshold *E*<sub>c</sub>, the director field deforms. The critical electric field *E*<sub>c</sub> is given as:

$$E\_{\rm c} = \frac{\pi}{d} \sqrt{\frac{K\_i}{\epsilon\_0 |\epsilon\_{\rm a}|}},\tag{3.20}$$

where *ε*<sub>0</sub> is the vacuum permittivity, *ε*<sub>a</sub> the dielectric anisotropy, *d* the thickness of the cell, and *K<sub>i</sub>* the elastic constant for splay, twist, or bend, respectively. For typical liquid crystals (*K<sub>i</sub>* ≈ 10<sup>−11</sup> N and *ε*<sub>a</sub> ≈ 10) in a Fréedericksz cell with plate separation *d* = 20 μm, the critical electric field is *E*<sub>c</sub> ≈ 0.05 V/μm.
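Both field-related scales are straightforward to reproduce. The minimal sketch below (using the illustrative parameter values quoted in the text) evaluates the coherence length of Eq. (3.19) and the Fréedericksz threshold of Eq. (3.20):

```python
import numpy as np

eps0 = 8.854e-12      # vacuum permittivity, F/m

# electric coherence length, Eq. (3.19)
L1, eps_a = 1e-11, 10.0   # illustrative values from the text
E = 1e6                   # 1 V/um expressed in V/m
xi_E = np.sqrt(L1 / (eps_a * eps0)) / E
print(f"xi_E = {xi_E * 1e6:.2f} um")    # ~0.3 um

# Freedericksz threshold, Eq. (3.20)
K, d = 1e-11, 20e-6
E_c = (np.pi / d) * np.sqrt(K / (eps0 * abs(eps_a)))
print(f"E_c = {E_c * 1e-6:.3f} V/um")   # ~0.05 V/um
```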

# *3.3.5 Magnetic Field Effects*

Many nematic fluids are diamagnetic. The diamagnetism is especially enhanced when the molecule is aromatic, because the benzene ring effectively acts as a coil. The free energy contribution describing the diamagnetic coupling, *f*<sub>H</sub>, can be introduced in a way analogous to the electric field, namely

$$f\_{\rm H} = -\frac{1}{2}\mu\_0 \left(\bar{\chi}\delta\_{ij} + \frac{2}{3}\chi\_{\rm a}^{\rm mol} Q\_{ij}\right)H\_i H\_j,\tag{3.21}$$

where *H<sub>i</sub>* is the external magnetic field, *μ*<sub>0</sub> the vacuum permeability, *χ̄* = (2*χ*<sub>⊥</sub> + *χ*<sub>∥</sub>)/3 the average liquid crystal magnetic susceptibility, and *χ*<sup>mol</sup><sub>a</sub> = *χ*<sup>mol</sup><sub>∥</sub> − *χ*<sup>mol</sup><sub>⊥</sub> the molecular magnetic anisotropy, which is connected to the macroscopic anisotropy of the magnetic susceptibility by *χ*<sub>a</sub> = *Sχ*<sup>mol</sup><sub>a</sub>. *χ*<sup>mol</sup><sub>⊥</sub> and *χ*<sup>mol</sup><sub>∥</sub> are the eigenvalues of the magnetic susceptibility tensor corresponding to the eigenvectors perpendicular and parallel to the director. Values for MBBA, a typical representative of thermotropic molecular nematic fluids, at room temperature are *χ*<sub>a</sub> = 1.23 × 10<sup>−7</sup> and *S* = 0.525, giving *χ*<sup>mol</sup><sub>a</sub> = 2.34 × 10<sup>−7</sup> [1].

The strength of the magnetic field can be characterised by introducing the magnetic coherence length *ξ*<sub>B</sub>, which follows from comparing the free energy contribution due to the magnetic field [Eq. (3.21)] with the nematic elasticity (Eq. (3.7)) [1]:

$$
\xi\_{\rm B} = \frac{1}{B} \sqrt{\frac{L\_1}{\chi\_{\rm a} \mu\_0}},
\tag{3.22}
$$

where *B* is typical magnetic field in the sample.

# **3.4 Topological Defects**

Frustration of the nematic ordering by opposing surfaces or external fields leads to the formation of defect regions, where the molecular orientation has no preferential direction. Defect regions are characterised by a severe drop of the nematic degree of order *S* (towards *S* = 0) and strong spatial distortions of the nematic director **n**. Defects in nematic liquid crystals can be either points or lines [56] and are usually characterised by topological charges and winding numbers [2, 57–59].

Singular point defects form either in the bulk or on the surfaces. Frequently, point defects in the bulk are named "hedgehogs", whereas those on the surfaces are called "boojums". The topological charge *q* of point defects can be introduced as an integral over a closed defect-free surface surrounding the defect [2]

$$q = \frac{1}{8\pi} \oint\_{\Omega} \epsilon\_{ijk} \mathbf{n} \cdot \left(\frac{\partial \mathbf{n}}{\partial x\_j} \times \frac{\partial \mathbf{n}}{\partial x\_k}\right) \mathrm{d}S\_i,\tag{3.23}$$

where *ε<sub>ijk</sub>* is the Levi-Civita totally antisymmetric tensor and *x<sub>i</sub>* are Cartesian coordinates. Notice that *q* is odd in **n**, which means that the topological charge in nematics is not uniquely defined due to the **n** → −**n** symmetry. Three typically observed configurations of point defects with charge magnitude |*q*| = 1 are the radial hedgehog **n** = (*x*, *y*, *z*)/√(*x*<sup>2</sup> + *y*<sup>2</sup> + *z*<sup>2</sup>), the circular hedgehog **n** = (*y*, −*x*, *z*)/√(*x*<sup>2</sup> + *y*<sup>2</sup> + *z*<sup>2</sup>), and the hyperbolic hedgehog **n** = (−*x*, −*y*, *z*)/√(*x*<sup>2</sup> + *y*<sup>2</sup> + *z*<sup>2</sup>).

The ± sign differentiates two vector fields with opposite topological charge ±*q* that represent the same physical director field. This sign ambiguity is always present in nematic systems. Frequently, a convention of assigning +1 to the radial and −1 to the hyperbolic hedgehog is used, but in general, the vectors in the entire sample must be oriented consistently and the topological charges assigned accordingly [33]. The elastic free energy of an isolated singular point defect scales as *KR*, where *R* is the size of the liquid crystal volume and *K* a typical value of the Frank elastic constants.

Line defects, also named disclinations, are locally quantified by the winding number (strength) *m*, which characterises the symmetry of the surrounding director field at some cross-section. For simplicity, let the disclination line be aligned parallel to the *z*-axis and observe the director field in the plane perpendicular to it. The in-plane director field can be parameterised by the director azimuthal angle *α*, and the integral over a closed loop gives the local winding number

$$m = \frac{1}{2\pi} \oint\_{\Gamma} \mathrm{d}\alpha.\tag{3.24}$$

The winding number *m* can be an integer or a half-integer, since the states **n** and −**n** are physically indistinguishable (Fig. 3.4). Note that the definition of the winding number assumes that the director is confined to the 2D plane perpendicular to the disclination. The in-plane director field of a disclination at the coordinate origin can be written as:

$$\mathbf{n} = (\cos \alpha, \sin \alpha, 0) = (\cos(m\phi + c), \sin(m\phi + c), 0), \tag{3.25}$$

with *c* being a constant which sets the shape and the relative orientation of the director field with respect to the coordinate frame.
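The winding number of Eq. (3.24) can be extracted numerically from a sampled in-plane director field. In the illustrative sketch below (an assumed implementation, not from the original text), the change of the director angle is accumulated around a loop with each step wrapped modulo π, since **n** and −**n** are identified; this correctly recovers half-integer values of *m*:

```python
import numpy as np

def winding_number(m_true, c=0.0, npts=400):
    """Sample the planar director of Eq. (3.25) on a loop around the
    defect and accumulate the change of the director angle alpha."""
    phi = np.linspace(0.0, 2.0 * np.pi, npts)
    alpha = m_true * phi + c
    nx, ny = np.cos(alpha), np.sin(alpha)      # in-plane director
    a = np.arctan2(ny, nx) % np.pi             # director angle, defined mod pi
    d = np.diff(a)
    d = (d + np.pi / 2) % np.pi - np.pi / 2    # wrap steps: n and -n identified
    return d.sum() / (2.0 * np.pi)             # Eq. (3.24)

for m in (-1.0, -0.5, 0.5, 1.0, 1.5):
    assert abs(winding_number(m) - m) < 1e-6   # integer and half-integer m
```

The modulo-π wrapping is the numerical counterpart of the head–tail symmetry that allows half-integer disclinations in the first place.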

**Fig. 3.4** Schematic representation of director field surrounding disclination lines with various winding numbers *m*. *c* = 0 was used


By integrating the Frank–Oseen free energy density (Eq. (3.13)) of the in-plane director field over the cross-section (using the single elastic constant approximation), one can calculate the free energy per unit length of the disclination line [2]

$$W\_{\rm def}(m) = \pi \, Km^2 \ln\left(\frac{R}{r\_{\rm core}}\right) + W\_{\rm core},\tag{3.26}$$

where *r*<sub>core</sub> is the core radius with energy per unit length *W*<sub>core</sub> ∼ *πm*<sup>2</sup>*K*, *K* is some function of the Frank elastic constants, and *R* is the system size. The Frank–Oseen approach does not apply for large gradients of the director; hence, the core is introduced to avoid the discontinuity of the director field in the centre of the disclination. The proportionality *W* ∝ *m*<sup>2</sup> implies that one disclination of strength *m* = ±1 bears twice the energy of two disclinations of strength *m* = ±1/2. As a result, in 2D systems only ±1/2 disclinations are stable. Note that the total energy of a disclination line is linearly proportional to its length. Line defects can also close into loops.
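Equation (3.26) makes this instability of integer disclinations explicit. In the sketch below (assumed illustrative parameter values, core term neglected), one *m* = 1 line carries exactly twice the elastic energy of two *m* = 1/2 lines:

```python
import numpy as np

K = 1e-11                # N, Frank constant (one-constant approximation, assumed)
R, r_core = 1e-5, 1e-8   # system size and core radius in m (assumed)

def W_def(m):
    """Elastic energy per unit length of a disclination, Eq. (3.26),
    with the core contribution W_core neglected."""
    return np.pi * K * m**2 * np.log(R / r_core)

ratio = W_def(1.0) / (2.0 * W_def(0.5))
print(ratio)   # 2.0: splitting into two half-integer lines halves the energy
```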

# *3.4.1 Umbilic Defects*

In contrast to the standard defects in liquid crystals, which have a discontinuity of the director field at their centre, umbilic defects are continuous everywhere and have no melted (isotropic) core. The discontinuity emerges only in the *projection* of the director field onto a distinct plane (*xy*) perpendicular to the far-field orientation (*z*-axis) [60]. It is therefore necessary to bear in mind that umbilics are not fundamentally topological, as they can be continuously transformed into a homogeneous field. Umbilic defects are most commonly created by using electric fields in Hele-Shaw cells [61] with strong homeotropic anchoring, containing a nematic monocrystal with negative dielectric anisotropy. The external field (above the critical value *E*<sub>c</sub>, Eq. (3.20)), applied perpendicular to the cell surfaces, induces bend distortions of the homogeneous alignment of the director field [55]. The situation is reminiscent of the Fréedericksz transition: the molecules tend to lie parallel to the surface in order to minimise the free energy, but importantly, no particular direction is preferred in this plane. This degeneracy of the tilt direction leads to the formation of umbilic defects (Fig. 3.5).

The distorted director field of umbilic defects can be written as [60–62]:

$$\mathbf{n} = (\cos(m\phi + \phi\_0)\sin\theta, \sin(m\phi + \phi\_0)\sin\theta, \cos\theta), \tag{3.27}$$

where *φ* is the azimuthal angle in the *xy* plane, *φ*<sub>0</sub> an arbitrary constant, and *θ* the tilt angle. In a Hele-Shaw cell of thickness *d* (with the plates at −*d*/2 and *d*/2), the tilt angle *θ* depends on the intensity of the electric field *E* and can be written as:

$$
\theta(z) = \theta\_0 \cos\left(\frac{\pi z}{d}\right),
\tag{3.28}
$$

**Fig. 3.5** Schematic representation of umbilic defects with *φ*<sup>0</sup> = 0. (**a**) Hele-Shaw cell with the side view of the umbilic defect *m* = +1. The discontinuity emerges only in the projection of the director field to the plane parallel with the plates (purple). (**b**) Umbilic defects of various umbilic charge (top view)

where

$$
\theta\_0 \approx 2\sqrt{\frac{E - E\_\mathbf{c}}{E\_\mathbf{c}}}.\tag{3.29}
$$

The direction of the tilt may be described by the two-dimensional unit vector **c**, which resides in the *xy* plane and gives the projected structure of the defect. The tilt directions **c** and −**c** are not equivalent, because **c** is an oriented vector; as a result, the umbilic charge *m* must be an integer. The core of umbilics is continuous, which allows for a full calculation of the elastic free energy. For example, for umbilics of strength *m* = +1 the free energy per unit length is proportional to *W* ∝ *K*<sub>1</sub> cos<sup>2</sup> *φ*<sub>0</sub> + *K*<sub>2</sub> sin<sup>2</sup> *φ*<sub>0</sub>, whereas for umbilics of strength *m* = −1 it is *W* ∝ (*K*<sub>1</sub> + *K*<sub>2</sub>)/2 [60, 63].

# *3.4.2 Basics of Topological Theory of Defects*

A comprehensive topological description of defects in liquid crystals requires the theory of homotopy [64]. The order parameter, namely the director, is a map from real space, excluding the singularities, to the ground state manifold, which is the topological space of all possible states the director can occupy. The ground state manifold of the nematic is the real projective plane R*P*<sup>2</sup>, the top half of a unit sphere with opposite points on the equator identified [58]. The main considerations are mappings of *i*-dimensional spheres enclosing the defects in real space. A line defect can be enclosed by a linear contour (*i* = 1), whereas a point defect is enclosed by a sphere (*i* = 2); therefore, they are mapped to R*P*<sup>2</sup> with different homotopy groups H<sub>*i*</sub>. Each element of the homotopy group corresponds to a class of topologically stable defects which can be continuously deformed one into the other [2]. These elements are topological invariants, previously referred to as the topological charges of the defects [59]. A defect-free state, where the director field **n** is equivalent to a constant, corresponds to the identity element of the homotopy group and zero topological charge.

Defect loops and point defects can both be enclosed by a sphere *S*<sup>2</sup> and thus have a topological index in the second homotopy group. In the sense of topology they are equivalent, as they can be continuously deformed one into another. The winding number *m* defines the local symmetry of the director field surrounding a defect loop. However, it is the topological charge *q* of the loop (Eq. (3.23)), as in the case of point defects, that defines its global topological properties.

Topological charge must be conserved, also in the process of annihilation or creation of defects. As a result, defects in the form of points or lines can only be created and annihilated in pairs of opposite sign. However, if the confining surfaces impose some preferential direction, this may, together with the genus of the surface, determine a non-zero net total topological charge. For example, in the case of a droplet with homeotropic anchoring at the surface, the total net topological charge of the nematic within the droplet is *q* = 1. Similarly, colloidal particles with certain surface anchoring effectively behave as point defects of a certain topological charge *q*. This is compensated by the surrounding nematic through the introduction of a defect with the opposite charge −*q*, preserving the total topological charge.

# **3.5 Nematodynamics**

In this section we discuss the hydrodynamics of nematic liquid crystals, considering the coupling between the nematic orientational ordering and the material flow. The flow field of the nematic is given by the generalised Navier–Stokes equation

$$
\rho \left[ \frac{\partial v\_i}{\partial t} + \left( v\_j \partial\_j \right) v\_i \right] = \partial\_j \sigma\_{ij}, \tag{3.30}
$$

where *ρ* is the density, **v** the velocity, and *σ* the stress tensor, which includes, besides the standard pressure, also the dependence on the anisotropic nematic order in the system. The stress tensor can be written as a sum of the Ericksen stress tensor *σ*<sup>Er</sup>, which includes the elasticity effects, and the viscous stress tensor *σ*<sup>viscous</sup>. The incompressibility condition

$$
\partial\_j v\_j = 0 \tag{3.31}
$$

is assumed. Equations (3.30) and (3.31) have to be complemented by an equation for the evolution of the nematic order parameter, written either in the director or in the Q-tensor form. In the presented formulation, we use typical assumptions from the literature on the dynamics of liquid crystals (in comparison to statics): two elastic constants and a negligible moment of inertia of the nematic molecules. We also omit the contributions of electric and magnetic fields to the nematodynamics, which are sufficiently discussed elsewhere [1, 6].

# *3.5.1 Ericksen Stress Tensor*

Elasticity of the nematic internal structure can transmit stresses through the bulk. Consider, for instance, a pair of colloidal particles in a nematic held by an external force at a distance *d* apart. The distortion of the director field and the nematic free energy depend on *d*. The force between the colloidal particles is mediated by the director field and can be calculated through the Ericksen stress tensor, which is derived by considering changes in the free energy due to the displacement of nematic molecules [1]. In the director formulation, the Ericksen stress is given by

$$
\sigma\_{ij}^{\text{Er}} = -\frac{\delta \mathcal{F}}{\delta\, \partial\_j n\_k}\, \partial\_i n\_k - \left(p\_0 - f\right)\delta\_{ij}. \tag{3.32}
$$

In the tensorial formulation, a similar expression holds:

$$
\sigma\_{ij}^{\text{Er}} = -\frac{\delta \mathcal{F}}{\delta\, \partial\_j Q\_{kl}}\, \partial\_i Q\_{kl} - \left(p\_0 - f\right) \delta\_{ij}, \tag{3.33}
$$

where *p*<sup>0</sup> is the external pressure, F the total free energy of the nematic, and *f* the bulk free energy density (Eq. (3.3)). In equilibrium, nematic may exert stress on the confining boundaries; however, equilibrium bulk forces, calculated from the divergence of the stress tensor, are exactly zero.

# *3.5.2 Ericksen–Leslie–Parodi Approach*

The Ericksen–Leslie–Parodi (ELP) approach formulates the description of nematic hydrodynamics, determining the coupling between the nematic director field and the velocity field. The model is written in terms of the stress tensor *σ<sub>ij</sub>*, the molecular field *h<sub>i</sub>* = −*δ*F/*δn<sub>i</sub>*, the rotation rate of the director with respect to the background fluid *N<sub>i</sub>* = *ṅ<sub>i</sub>* − ((∇ × **v**) × **n**)<sub>*i*</sub>/2, and the symmetric velocity gradient tensor *A<sub>ij</sub>* = (*∂<sub>i</sub>v<sub>j</sub>* + *∂<sub>j</sub>v<sub>i</sub>*)/2. The ELP approach relies on considering the processes that contribute to the entropy production, expressed through *thermodynamic forces* (*σ<sub>ij</sub>* and *h<sub>i</sub>*) and *thermodynamic fluxes* (*A<sub>ij</sub>* and *N<sub>i</sub>*). The phenomenological relations between the forces and fluxes must reflect the nematic symmetry **n** → −**n** and obey the Onsager reciprocal relations. The resulting expression for the molecular field is [1]


$$h\_i = \gamma\_1 N\_i + \gamma\_2 A\_{ij} n\_j,\tag{3.34}$$

where *γ*<sub>1</sub> and *γ*<sub>2</sub> are the viscosity coefficients discussed below. The equation for the time derivative of **n**, derived from Eq. (3.34), has to include a Lagrange multiplier to preserve the unit length of the director:

$$\dot{n}\_{i} = \frac{1}{2} \left( (\nabla \times \mathbf{v}) \times \mathbf{n} \right)\_{i} + \frac{1}{\gamma\_{1}} h\_{i} - \frac{\gamma\_{2}}{\gamma\_{1}} A\_{ij} n\_{j} - \Lambda n\_{i} \,. \tag{3.35}$$
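
The Lagrange multiplier in Eq. (3.35) can be fixed by requiring $\mathbf{n}\cdot\dot{\mathbf{n}} = 0$, i.e. Λ is the projection of the remaining terms onto **n**. The following is a minimal numerical sketch of this constraint, with illustrative values for *γ*<sub>1</sub> and *γ*<sub>2</sub> and a hypothetical helper `director_rate` (not from the text):

```python
import numpy as np

def director_rate(n, h, A, curl_v, gamma1, gamma2):
    """Right-hand side of Eq. (3.35) for a unit director n."""
    # Vorticity rotation, molecular-field relaxation, and flow alignment
    rhs = 0.5 * np.cross(curl_v, n) + h / gamma1 - (gamma2 / gamma1) * (A @ n)
    # Lagrange multiplier: choose Lambda so that n . dn/dt = 0,
    # which preserves |n| = 1 during the time evolution
    Lam = np.dot(n, rhs)
    return rhs - Lam * n

rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])          # unit director
h = rng.normal(size=3)                 # stand-in molecular field
G = rng.normal(size=(3, 3))            # stand-in velocity gradient tensor
A = 0.5 * (G + G.T)                    # symmetric part A_ij
curl_v = rng.normal(size=3)            # vorticity vector
ndot = director_rate(n, h, A, curl_v, gamma1=0.08, gamma2=-0.1)
print(np.dot(n, ndot))                 # ~0: unit length is preserved
```

Whatever the flow and molecular field, the projection step guarantees that the update stays tangent to the unit sphere of director orientations.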

The viscous stress tensor is written within the ELP theory as:

$$
\sigma\_{ij}^{\text{viscous}} = \alpha\_1 n\_i n\_j n\_k n\_l A\_{kl} + \alpha\_2 n\_j N\_i + \alpha\_3 n\_i N\_j + \alpha\_4 A\_{ij} + \alpha\_5 n\_j n\_k A\_{ik} + \alpha\_6 n\_i n\_k A\_{jk} \,. \tag{3.36}
$$

The viscosity coefficients *γ*<sub>1</sub> and *γ*<sub>2</sub> are functions of the Leslie viscosities *α*<sub>i</sub>:

$$
\gamma\_1 = \alpha\_3 - \alpha\_2,\tag{3.37}
$$

$$
\gamma\_2 = \alpha\_6 - \alpha\_5 = \alpha\_2 + \alpha\_3. \tag{3.38}
$$

The six Leslie viscosities *α*<sub>i</sub> are constrained by Eq. (3.38), meaning that there are five independent parameters within the ELP approach. For typical thermotropic liquid crystals, such as 5CB or MBBA, they are of the order of magnitude of 0.001–0.1 Pa s [1] and can be measured by a variety of experimental techniques, such as observing liquid crystals under laminar flow, sound attenuation, time-dependent variation of orienting external fields, or scattering of light [1].
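
The constraint of Eq. (3.38) can be made concrete in a few lines. The numbers below are order-of-magnitude placeholders within the 0.001–0.1 Pa s range quoted above, not measured values for any specific material:

```python
# Illustrative Leslie viscosities (placeholder values, Pa s); alpha_6 is
# not free but fixed by the Parodi relation, Eq. (3.38): a2 + a3 = a6 - a5
alpha = {1: 0.0065, 2: -0.0775, 3: -0.0012, 4: 0.0832, 5: 0.0463}
alpha[6] = alpha[2] + alpha[3] + alpha[5]

gamma1 = alpha[3] - alpha[2]     # rotational viscosity, Eq. (3.37)
gamma2 = alpha[6] - alpha[5]     # Eq. (3.38)

print(gamma1, gamma2)
```

With five coefficients chosen, the sixth follows, and the two combinations *γ*<sub>1</sub>, *γ*<sub>2</sub> that enter Eqs. (3.34)–(3.35) are fixed.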

In order to better quantify nematic flow, one can construct and use relevant dimensionless numbers, as is common practice in general fluid dynamics. In experiments and simulations involving nematic flow, the Reynolds number is typically smaller than 1; moreover, the Reynolds number does not include the effects of orientational order. Better insight into the nematic nature of the flow is given by comparing the elastic to the viscous forces in Eq. (3.34), which gives the Ericksen number

$$\text{Er} = \frac{\gamma\_1 v/l}{K/l^2} = \frac{\gamma\_1 vl}{K},\tag{3.39}$$

where *v* is a typical velocity of the problem and *l* a typical length scale. At small Ericksen numbers, the director dynamics is governed by the elastic terms; at large Ericksen numbers, the dynamics is dictated by the velocity profile. Typical values of the Ericksen number are Er ∼ 1 when considering annihilation of defect pairs [12, 13] or moderately slow flow in microchannels [14] and Er ∼ 20 for strong flow in microchannels [14].
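
As a quick sanity check of Eq. (3.39), the sketch below evaluates Er for assumed microchannel parameters (the specific values of *γ*<sub>1</sub>, *K*, *l*, and *v* are illustrative, chosen only to land in the regimes quoted above):

```python
def ericksen(gamma1, v, l, K):
    """Er = gamma1 * v * l / K, Eq. (3.39): viscous vs. elastic torques."""
    return gamma1 * v * l / K

# Assumed order-of-magnitude inputs for a thermotropic nematic microchannel
gamma1 = 0.08    # rotational viscosity [Pa s]
K = 1e-11        # elastic constant [N]
l = 10e-6        # channel length scale [m]

Er_slow = ericksen(gamma1, v=1e-6, l=l, K=K)     # slow flow: elasticity wins
Er_fast = ericksen(gamma1, v=250e-6, l=l, K=K)   # strong flow: Er ~ 20
print(Er_slow, Er_fast)
```

The same formula shows why sub-micron confinement (small *l*) pushes a given flow back into the elasticity-dominated regime.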

Looking at the governing equations of the ELP approach, one can identify the basic mechanisms of nematic hydrodynamics. Below we show a few selected basic examples of nematic flows. In Sect. 3.3 we have discussed how the nematic equilibrium orientation profile is defined by a minimum of the free energy. However, out of equilibrium, the nematic orientation field is deformed also by velocity effects, as described, for example, by Eq. (3.35). Within the ELP approach, the director field is distorted by velocity gradients that impose a hydrodynamic torque $\mathbf{n}\times\mathbf{h}$ upon the nematic molecules. At strong flows (i.e. large Ericksen numbers) the director tends to align in the direction where the hydrodynamic torque vanishes. The director tilt angle at which this condition is satisfied is in simple geometries given by the Leslie angle $\theta\_\mathrm{L} = \frac{1}{2}\arccos\frac{1}{\lambda}$, where *λ* is the alignment parameter, calculated from the Leslie viscosities as $\lambda = -\gamma\_2/\gamma\_1$. Figure 3.6 shows the director structure for (a) Couette and (b) Poiseuille flow of a nematic fluid at Er ∼ 80 and *λ* = 1.1. The cell surface imposes strong homeotropic anchoring and the director deforms in the bulk due to hydrodynamic torques. For a Couette flow the director tilt angle in the middle of the sample is close to the Leslie angle; however, next to the surfaces, it is continuously deformed to satisfy the boundary condition. A similar situation takes place in the Poiseuille geometry, only that the director tilts in the opposite direction when the shear is reversed. The alignment parameter *λ* typically reflects the shape of the nematic molecules. For |*λ*| < 1, the hydrodynamic torque does not vanish and the director field continuously deforms in time. An example of such tumbling motion is discussed in Sect. 3.6. *λ* can also have negative values, in which case it is associated with discotic molecules [65].
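
The aligning-versus-tumbling criterion is easy to evaluate numerically. A small sketch (the *γ*<sub>1</sub>, *γ*<sub>2</sub> values are illustrative, chosen so that *λ* = 1.1 as in Fig. 3.6; `leslie_angle` is a hypothetical helper):

```python
import math

def leslie_angle(gamma1, gamma2):
    """Flow-alignment angle theta_L = 0.5*arccos(1/lambda) for |lambda| > 1;
    for |lambda| < 1 the hydrodynamic torque never vanishes (tumbling)."""
    lam = -gamma2 / gamma1
    if abs(lam) < 1.0:
        return None            # tumbling regime: no steady tilt angle
    return 0.5 * math.acos(1.0 / lam)

theta = leslie_angle(gamma1=0.08, gamma2=-0.088)   # lambda = 1.1
print(math.degrees(theta))                         # ~12.3 degrees
print(leslie_angle(gamma1=0.08, gamma2=-0.04))     # lambda = 0.5 -> None
```

For *λ* = 1.1 the director settles at roughly 12 degrees from the flow direction, while halving *γ*<sub>2</sub> pushes the material into the tumbling regime.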

The viscous stress tensor (Eq. (3.36)) includes six terms that couple local viscous losses to the director and its time derivative. The meaning of the anisotropy in the stress tensor is clearly seen in a simple geometry, as first considered by Miesowicz. In the Miesowicz geometry a nematic is confined between parallel plates and subjected to shear flow. The nematic director is fixed by a strong magnetic or electric field in the direction (Fig. 3.7a) perpendicular to the flow and to the shear, (Fig. 3.7b) along the flow, or (Fig. 3.7c) along the shear. In each of the three cases, the stress tensor is substantially simplified and an effective viscosity can be determined. For MBBA, the values of the Miesowicz viscosities are *η*<sub>a</sub> ≈ 0.042 Pa s, *η*<sub>b</sub> ≈ 0.024 Pa s, and *η*<sub>c</sub> ≈ 0.104 Pa s [1]. Measurement of the Miesowicz viscosities is an important contribution when determining a full set of Leslie viscosities *α*<sub>i</sub> in nematic fluids [1]. For MBBA, the lowest effective viscosity is obtained in the case of the director pointing

**Fig. 3.6** Director distortion in (**a**) Couette and (**b**) Poiseuille flow at Er ∼ 80. Hydrodynamic torque due to the velocity gradient tends to align the director tilt angle *θ* towards the Leslie angle *θ*<sub>L</sub>

**Fig. 3.7** Miesowicz geometry of shear flow in nematic microfluidics. Homogeneous director field is fixed by a strong external field (**a**) perpendicular to the flow and the shear, (**b**) along the flow, or (**c**) along the shear. An effective viscosity *η*<sub>a</sub>, *η*<sub>b</sub>, or *η*<sub>c</sub> can be determined from the viscous stress tensor

**Fig. 3.8** Director and flow field during the annihilation process of two oppositely charged umbilic defects. Director field is shown with white rods and flow field with green arrows. The position of the approaching −1 and +1 umbilics is indicated by red dots. Reprinted figure with permission from [I. Dierking, M. Ravnik, E. Lark, J. Healey, G.P. Alexander, J.M. Yeomans, Phys. Rev. E **85**, 21703 (2012)]. Copyright (2012) by the American Physical Society

along the velocity field. For example, this characteristic can be used in flow-guiding through microfluidic junctions, as discussed in Sect. 3.6.

The viscous stress tensor (Eq. (3.36)) depends not only on the direction of the director, but also on its time derivative, which means that time variation of the director field may induce flows in a nematic, as, for example, in the process of relaxation to a free energy minimum or adaptation to time-varying external fields. Such an example is the annihilation of a defect pair, which induces a flow field due to the relaxation of the orientational structure. Interestingly, the speed of the defects is substantially altered by the presence of flow. In the annihilation of a ±1/2 defect pair, in particular the +1/2 defect is advected by the flow, increasing the rate at which the defects approach each other [12, 13]. Figure 3.8 shows an example of the annihilation of two oppositely charged umbilic defects, which also approach each other and annihilate [61].

The ELP approach gives the nematic contribution to the material flow and the velocity contribution to the orientational dynamics in a formulation where the individual mechanisms are easily recognisable. It is particularly useful when considering analytical solutions to problems of nematic hydrodynamics. However, it suffers from the drawbacks of a director formulation; in particular, the defect cores always have to be excluded from the calculation, which can make modelling or theoretical analysis difficult. In the next two sections, we present the Beris–Edwards and Qian–Sheng models that not only recover the ELP equations at uniform degree of order *S*, but also include the coupling between the flow field and the degree of order. These two models are among the most commonly used of the many different formulations of nematodynamics in terms of the tensorial nematic order parameter [66–69]. Since the formulation of nematodynamic equations in terms of the tensor order parameter eliminates the need for special treatment of defects, it allows one to explore problems of further complexity.

# *3.5.3 Beris–Edwards Model*

Beris and Edwards formulated their equations for nematic hydrodynamics through a tensorial description of nematic order, utilising a generalisation of the Poisson bracket description of thermodynamics [5]. In a typical formulation, their equations are written as [70]:

$$
\dot{\mathcal{Q}}\_{ij} = S\_{ij} + \Gamma H\_{ij},\tag{3.40}
$$

$$\begin{split} S\_{ij} &= (\xi A\_{ik} - \Omega\_{ik}) \left( \mathcal{Q}\_{kj} + \frac{\delta\_{kj}}{3} \right) + \left( \mathcal{Q}\_{ik} + \frac{\delta\_{ik}}{3} \right) (\xi A\_{kj} + \Omega\_{kj}) \\ &- 2\xi \left( \mathcal{Q}\_{ij} + \frac{\delta\_{ij}}{3} \right) \mathcal{Q}\_{kl} \frac{\partial v\_{k}}{\partial x\_{l}}, \end{split} \tag{3.41}$$

$$\begin{split} \sigma\_{ij}^{\text{viscous}} &= -\xi \, H\_{ik} \left( \mathcal{Q}\_{kj} + \frac{\delta\_{kj}}{3} \right) - \xi \left( \mathcal{Q}\_{ik} + \frac{\delta\_{ik}}{3} \right) H\_{kj} + 2\xi \left( \mathcal{Q}\_{ij} + \frac{\delta\_{ij}}{3} \right) \mathcal{Q}\_{kl} H\_{kl} \\ &+ \mathcal{Q}\_{ik} H\_{kj} - H\_{ik} \mathcal{Q}\_{kj} + 2\eta A\_{ij}, \end{split} \tag{3.42}$$

where $\Omega\_{ij} = \left(\partial\_i v\_j - \partial\_j v\_i\right)/2$ is the antisymmetric part of the velocity gradient tensor and **H** is the molecular field defined as:

$$H\_{ij} = -\frac{1}{2} \left( \frac{\delta \mathcal{F}}{\delta \mathcal{Q}\_{ij}} + \frac{\delta \mathcal{F}}{\delta \mathcal{Q}\_{ji}} \right) + \frac{1}{3} \mathrm{Tr} \left( \frac{\delta \mathcal{F}}{\delta \mathcal{Q}\_{kl}} \right) \delta\_{ij}. \tag{3.43}$$
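
Equation (3.43) is a projection: it symmetrises the functional derivative and removes its trace, so that **H** lives in the same five-dimensional space as **Q** itself. A minimal numerical sketch (the input matrix is a random stand-in for a functional derivative):

```python
import numpy as np

def molecular_field(dF_dQ):
    """Symmetric, traceless projection of -dF/dQ, as in Eq. (3.43)."""
    H = -0.5 * (dF_dQ + dF_dQ.T)          # symmetrise (with the minus sign)
    H += np.trace(dF_dQ) * np.eye(3) / 3  # remove the trace
    return H

rng = np.random.default_rng(1)
dF_dQ = rng.normal(size=(3, 3))           # stand-in functional derivative
H = molecular_field(dF_dQ)
print(np.trace(H), np.max(np.abs(H - H.T)))   # both ~0
```

Keeping **H** symmetric and traceless guarantees that the relaxation term in Eq. (3.40) does not drive **Q** out of the space of admissible order parameter tensors.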

The Beris–Edwards model as formulated above has three independent viscosity parameters *Γ*, *ξ*, and *η*, from which the Leslie viscosities can be determined [70]:

$$\begin{aligned} \alpha\_1 &= \frac{\xi^2}{\Gamma} \frac{9S^2}{2} \left(3S^2 - 2S - 1\right), & \alpha\_4 &= \frac{\xi^2}{\Gamma} \left(S - \frac{2}{3}\right)^2 + 2\eta, \\\alpha\_2 &= -\frac{\xi}{\Gamma} \frac{S}{4} \left(3S + 4\right) - \frac{1}{\Gamma} \frac{9S^2}{4}, & \alpha\_5 &= -\frac{\xi^2}{\Gamma} \frac{S}{4} \left(3S - 8\right) + \frac{\xi}{\Gamma} \frac{S}{4} \left(3S + 4\right), \\\alpha\_3 &= -\frac{\xi}{\Gamma} \frac{S}{4} \left(3S + 4\right) + \frac{1}{\Gamma} \frac{9S^2}{4}, & \alpha\_6 &= -\frac{\xi^2}{\Gamma} \frac{S}{4} \left(3S - 8\right) - \frac{\xi}{\Gamma} \frac{S}{4} \left(3S + 4\right). \end{aligned} \tag{3.44}$$

The parameters in the Beris–Edwards model have a clear physical meaning. The rotational diffusion constant *Γ* sets the typical time scale of the dynamical processes in the nematic at a given length scale. The parameter *ξ* is directly related to the alignment parameter in the ELP representation, $\lambda = \frac{3S+4}{9S}\xi$, thus prescribing the Leslie angle in shear flow or the tumbling nature of the nematic. The parameter *η* affects the isotropic viscosity in the system.
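
The mapping of Eq. (3.44) can be checked directly: the resulting Leslie coefficients must satisfy the Parodi relation, and *λ* = −*γ*<sub>2</sub>/*γ*<sub>1</sub> must reduce to the expression above. A sketch with illustrative placeholder values of *Γ*, *ξ*, and *S*:

```python
# Selected Leslie viscosities from Beris-Edwards parameters, Eq. (3.44);
# Gamma, xi, S below are illustrative placeholders, not fitted values.
Gamma, xi, S = 7.3, 0.7, 0.53

a2 = -(xi / Gamma) * (S / 4) * (3 * S + 4) - 9 * S**2 / (4 * Gamma)
a3 = -(xi / Gamma) * (S / 4) * (3 * S + 4) + 9 * S**2 / (4 * Gamma)
a5 = -(xi**2 / Gamma) * (S / 4) * (3 * S - 8) + (xi / Gamma) * (S / 4) * (3 * S + 4)
a6 = -(xi**2 / Gamma) * (S / 4) * (3 * S - 8) - (xi / Gamma) * (S / 4) * (3 * S + 4)

gamma1, gamma2 = a3 - a2, a6 - a5       # Eqs. (3.37)-(3.38)
lam = -gamma2 / gamma1                  # alignment parameter
print(gamma1, lam)
```

Note that *γ*<sub>1</sub> = 9*S*²/(2*Γ*) follows immediately, making explicit how *Γ* alone controls the rotational relaxation time.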

# *3.5.4 Qian–Sheng Model*

A different nematodynamic model based on the Q-tensor was formulated by Qian and Sheng [6]. Similar to the ELP approach, in their derivation they follow the formalism of thermodynamic fluxes and forces, only within the description of the tensorial nematic order. The viscous stress tensor is written in the Qian–Sheng formulation as:

$$\begin{split} \sigma\_{ij}^{\text{viscous}} &= \beta\_1 \mathcal{Q}\_{ij} \mathcal{Q}\_{kl} A\_{kl} + \beta\_4 A\_{ij} + \beta\_5 A\_{ik} \mathcal{Q}\_{kj} + \beta\_6 \mathcal{Q}\_{ik} A\_{kj} \\ &+ \frac{1}{2} \mu\_2 N\_{ij} - \mu\_1 N\_{ik} \mathcal{Q}\_{kj} + \mu\_1 \mathcal{Q}\_{ik} N\_{kj}, \end{split} \tag{3.45}$$

where $N\_{ij}$ is the corotational derivative of the Q-tensor

$$N\_{ij} = \dot{\mathcal{Q}}\_{ij} + \Omega\_{ik}\mathcal{Q}\_{kj} - \mathcal{Q}\_{ik}\Omega\_{kj}.\tag{3.46}$$

Time evolution of the Q-tensor is given by

$$
\dot{\mathcal{Q}}\_{ij} = \frac{H\_{ij}}{\mu\_1} - \frac{\mu\_2 A\_{ij}}{2\mu\_1} + \mathcal{Q}\_{ik}\Omega\_{kj} - \Omega\_{ik}\mathcal{Q}\_{kj}.\tag{3.47}
$$

Note that from Eqs. (3.46) and (3.47) the corotational derivative $N\_{ij}$ in the equation for the stress tensor can be expressed in terms of the molecular field $H\_{ij}$, which gives a form more similar to the Beris–Edwards expression (Eq. (3.42)).
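
This cancellation of the corotational terms is easy to verify numerically: substituting Eq. (3.47) into Eq. (3.46) leaves $N\_{ij} = H\_{ij}/\mu\_1 - \mu\_2 A\_{ij}/(2\mu\_1)$. A sketch with randomly generated tensors and illustrative values of *μ*<sub>1</sub>, *μ*<sub>2</sub>:

```python
import numpy as np

mu1, mu2 = 0.05, -0.1                  # illustrative viscosity coefficients
rng = np.random.default_rng(2)

def traceless_sym(M):
    M = 0.5 * (M + M.T)
    return M - np.trace(M) * np.eye(3) / 3.0

Q = traceless_sym(rng.normal(size=(3, 3)))   # random admissible Q-tensor
H = traceless_sym(rng.normal(size=(3, 3)))   # stand-in molecular field
G = rng.normal(size=(3, 3))                  # stand-in velocity gradient
A, Omega = 0.5 * (G + G.T), 0.5 * (G - G.T)  # symmetric / antisymmetric parts

Qdot = H / mu1 - mu2 * A / (2 * mu1) + Q @ Omega - Omega @ Q   # Eq. (3.47)
N = Qdot + Omega @ Q - Q @ Omega                               # Eq. (3.46)
# The rotation terms cancel: N depends only on H and A
print(np.max(np.abs(N - (H / mu1 - mu2 * A / (2 * mu1)))))     # ~0
```

In other words, the corotational derivative relaxes along the molecular field and responds to the strain rate, exactly as in the ELP picture.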

The Qian–Sheng model is formulated with six viscosity coefficients *β*<sub>1</sub>, *β*<sub>4</sub>, *β*<sub>5</sub>, *β*<sub>6</sub>, *μ*<sub>1</sub>, and *μ*<sub>2</sub>, linked by the relation *β*<sub>6</sub> − *β*<sub>5</sub> = *μ*<sub>2</sub>. The number of coefficients is exactly the same as in the Ericksen–Leslie theory. At a constant degree of order, the coefficients can be exactly mapped between the two theories, thus allowing for the use of all of the experimentally measured viscosity coefficients given within the ELP formalism.

# *3.5.5 Towards Active Nematics*

Materials that exhibit inherent activity are chemically or biologically different from standard nematic liquid crystalline fluids. However, selected active materials show nematic order, as, for example, kinesin-driven microtubules, bacterial colonies, or flocks of animals [19]. A possible approach to describe in particular dense suspensions of such active constituents is to adapt the equations of nematic hydrodynamics, as discussed in the previous sections, for example, by including an active stress tensor. The active stress arises due to the force profiles that active particles apply on their surroundings, and can be written in the form [71]

$$
\sigma\_{ij}^{\text{active}} = -\alpha \mathcal{Q}\_{ij}. \tag{3.48}
$$

The active stress is proportional to the nematic tensor order parameter, with the proportionality constant *α* being the activity. For active particles that exert contractile stress *α* < 0, and for extensile stress *α* > 0. In such a model, if the nematic alignment is homogeneous, the divergence of the active stress tensor is zero and there are no effective active forces present. However, even in homogeneous alignment, active nematics and polar gels are prone to instabilities [72]. Active forces are particularly large close to defects, where the gradients of *Q* are large. As shown in Fig. 3.9, active forces give rise to the self-propulsion of +1/2 defects, which is an important mechanism in chaotic flows in active layers [73], or, at the interface with a passive nematic, active defects can even drive the distortion of the passive medium [74]. Note that in addition to the approach presented here, there are other approaches for describing active nematic systems, such as Vicsek-like models [75], multiscale approaches [76], and minimal hydrodynamic models in terms of solely the velocity field [77].
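
The statement that active forces concentrate at defects can be illustrated by evaluating the force density $f\_i = -\alpha\,\partial\_j \mathcal{Q}\_{ij}$ on a grid around an ideal +1/2 defect. The two-dimensional Q-tensor, the grid, and the parameters *α* = 1, *S* = 0.5 below are illustrative assumptions, not taken from the text:

```python
import numpy as np

alpha, S = 1.0, 0.5                       # assumed activity and order
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = 0.5 * np.arctan2(Y, X)              # +1/2 defect: n rotates by pi
nx, ny = np.cos(phi), np.sin(phi)
Qxx = S * (nx * nx - 0.5)                 # 2D traceless Q (Qyy = -Qxx)
Qxy = S * (nx * ny)

dx = x[1] - x[0]
# Active force density f = -alpha * div Q, from Eq. (3.48)
fx = -alpha * (np.gradient(Qxx, dx, axis=0) + np.gradient(Qxy, dx, axis=1))
fy = -alpha * (np.gradient(Qxy, dx, axis=0) - np.gradient(Qxx, dx, axis=1))
fmag = np.hypot(fx, fy)
# Force is along the defect axis and grows towards the core (~1/r)
print(fmag[110, 100], fmag[190, 100])
```

For this ideal profile the force points along the symmetry axis of the defect and decays as 1/*r* away from the core, which is the origin of the defect self-propulsion sketched in Fig. 3.9.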

**Fig. 3.9** Active flow, developed around +1*/*2 topological defect for active extensile nematic in cylindrical confinement. The director field (solid red lines) and the velocity field (blue arrows) show the mechanism of self-propulsion of a +1*/*2 defect, which competes with the elastic forces on the defect, leading to a fixed nematic structure in time. For contractile active nematics, the direction of self-propulsion of +1*/*2 defects is exactly opposite. Reprinted figure with permission from [M. Ravnik, J. Yeomans, Phys. Rev. Lett. **110**, 26001 (2013)]. Copyright (2013) by the American Physical Society

# **3.6 Nematic Microfluidics**

In this section we show selected examples of nematic flow in typically confined environments. The fluidity of nematics can have important consequences in many applications, such as in liquid crystal displays [78, 79], or it can lead to complex pattern formation, as, for example, in the process of electroconvection [80, 81]. Rheological properties have been studied for a variety of liquid crystalline materials, ranging from thermotropic liquid crystals [14] to cholesterics [82] and suspensions of viruses [83].

# *3.6.1 Nematic Flows in Channels*

In Fig. 3.6 we showed the nematic orientation in Couette and Poiseuille geometry at large Ericksen number and with strong homeotropic anchoring at the walls. A similar setup, where the preferred alignment of the director at the walls is planar and perpendicular to the flow, was investigated by Pieranski and Guyon [84]. An undistorted configuration in such Poiseuille geometry is shown in Fig. 3.10a. In particular at moderate Ericksen numbers, where the nematic orientation results from a competition between elastic and hydrodynamic effects, there are two possible director orientations in such geometries of the shear flow, as shown in Fig. 3.10b. These two conformations occur in nematics with Leslie viscosities *α*<sub>2</sub> and *α*<sub>3</sub> of the same sign due to hydrodynamic torques that act on the director as soon as it slightly fluctuates from the undistorted alignment in Fig. 3.10a [2]. Since these

**Fig. 3.10** Instabilities in the Poiseuille flow in nematic channels with anchoring perpendicular to the flow and flow gradient. (**a**) Geometry of the problem with the director field undistorted by the flow. (**b**) Two possible configurations of the director in the weak shear gradient. Left cylinder shows the alignment preferred by the nematic elasticity and surface anchoring. (**c**) Creation of the rolls in the velocity field due to the force *F*<sub>A</sub>; the two solutions shown in (**b**) alternate along the channel. (**d**) Experimental photograph of flow instabilities in Poiseuille geometry. Reused and adapted with permission from publisher [E. Guyon, P. Pieranski, Poiseuille flow instabilities in nematics. J. Phys. Colloq. **36**, C1 (1975)]

director distortions compete with the planar anchoring at the surfaces, the transition to the distorted state occurs only after a certain finite shear threshold is exceeded [84]. Above the threshold, the director configuration varies in the *z* direction, leading to an additional force in the *x* direction, $F\_A = \partial\_z \sigma\_{xz}$, shown in Fig. 3.10c. The force leads to the creation of rolls in the velocity profile (Fig. 3.10c), which have a well-defined wavelength. The rolls coincide with the two solutions for the director reorientation. In fact, depending on the driving pressure difference and frequency, a range of mechanisms can be found that lead to hydrodynamic instability and the creation of rolls [84, 85]. Figure 3.10d shows a photograph revealing rolls with different wavelengths due to different hydrodynamic instabilities in nematics. This example shows the creation of instabilities in the velocity field and in the nematic orientation, once the coupling terms are introduced into the stress tensor and the equation for the director orientation.

As discussed in Sect. 3.5.2, in nematics with alignment parameter |*λ*| > 1, the hydrodynamic torque disappears for certain angles of the director with respect to the shear flow. For |*λ*| < 1 this is no longer the case and the hydrodynamic torque tends to continuously rotate the director. In Ref. [86] two-dimensional nematic channels were explored in the aligning and in the tumbling regime. In the aligning regime, the director profile reaches a stationary orientation, which is not the case for tumbling motion. For straight channels, the tumbling regime shows a series of *π* turns of the director across the channel. These turns are continuously generated and annihilated in time. In channels with variable width, the director structure with *π* turns becomes unstable and pairs of oppositely charged defects are generated.

The dependence of the nematic viscosity on the director orientation can be used in microfluidic circuits to control the direction of flow and the transport of material. One such example is given in Ref. [17], where an electric field is used to switch the nematic orientation in a channel. In Ref. [17] the preferred alignment of the director is along the channel, providing an effective Miesowicz viscosity *η*<sub>b</sub> to the flow. A local electric field was used to impose director alignment along the shear, effectively increasing the viscosity by a factor of ∼4 to the Miesowicz value *η*<sub>c</sub>. When the flow reached a Y-junction, most of the nematic fluid flowed to the branch with the lower effective viscosity. It was shown that this mechanism could be used for particle sorting by turning the electric field on and off in individual branches of a Y-junction and by doing so controlling the flow through the junction. The colloidal particles go into the channel with the higher flow rate. Note that switching the nematic orientation in the channel is not the only mechanism to transport and guide cargo in nematic microfluidic circuits. Sengupta and co-workers demonstrated that in channels with hybrid anchoring conditions (anchoring along the normal at three sides of the rectangular channel, and anchoring along the channel on one side), a defect line can be guided through crossings of different channels. Colloidal cargo is then pinned to the defect line and advected along it by the flow [16].
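
The flow-guiding effect can be caricatured with a lumped hydraulic-resistance model (this is a sketch under strong assumptions, not the model of Ref. [17]): in Poiseuille flow the resistance of a branch scales with its effective viscosity, so for identical branch geometries the flow splits in inverse proportion to *η*<sub>b</sub> (field off) versus *η*<sub>c</sub> (field on):

```python
# MBBA Miesowicz viscosities from the text [Pa s]; identical branch
# geometries are assumed, so only the viscosity ratio matters.
eta_b, eta_c = 0.024, 0.104

def split(eta_left, eta_right):
    """Fraction of the total flow entering the left branch of a Y-junction."""
    g_left, g_right = 1.0 / eta_left, 1.0 / eta_right   # hydraulic conductances
    return g_left / (g_left + g_right)

# Field switched on in the right branch only: right viscosity jumps to eta_c
frac_left = split(eta_b, eta_c)
print(frac_left)   # ~0.81: most of the fluid (and the cargo) takes the left branch
```

Even in this crude picture, switching a single branch redirects roughly four fifths of the flow, which is qualitatively the sorting mechanism described above.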

# *3.6.2 Nematic Microfluidic Junctions*

Complex flow field profiles, as induced, for example, in junctions of nematic microchannels, can be used to create and study topological nematic defects in a controlled environment. In Ref. [87] junctions of 4, 6, and 8 microchannels are used to create nematic defects with effective charge −1, −2, and −3, respectively (Fig. 3.11a–c). The main mechanism for the creation of such defects is the fact that in the centre of a nematic microchannel, at sufficiently large Ericksen numbers, the director turns along the channel, following the mechanism shown in Fig. 3.6b. The total net topological charge of −2 and −3 imposed in the junctions is realised by multiple defects with 3D topological charge of −1, as this is energetically more favourable and cannot be further divided into smaller individual charges. In the observed microjunctions two topological structures are present—a defect in the orientational field of the nematic and a stagnation point in the velocity field. Figure 3.11d–g shows how the cross-talk between these two topological singularities is probed by applying a pressure pulse in one of the channels. A fast shift of the stagnation point is always followed by a slow response of the nematic defect. In a stable configuration the positions of both structures coincide. If the pressure is reduced in one of the outflowing channels (Fig. 3.11e, g), the nematic defect first moves

**Fig. 3.11** Creation of topological defects within nematic microfluidic junctions [87]. One, two, and three defects of charge −1 are created in junctions of 4, 6, and 8 microchannels, respectively. (**a**) Polarisation micrograph of the nematic structure. (**b**) Hydrodynamic stagnation point in the centre of a junction shown by epifluorescent imaging of fluorescent tracers. Panels (**a** and **b**) are courtesy of A. Sengupta. (**c**) Details of the nematic structure revealed by numerical simulations. The effective interaction between the stagnation point and the topological defect in the nematic orientation is probed by inducing a pressure pulse in the (**d**) West or (**e**) South channel. The pressure pulse quickly shifts the stagnation point, marked by a white spot in the colourmap of the velocity magnitude. In (**d**) the relocation of the stagnation point is followed by a gradual shift of the nematic defect, as shown in (**f**). In (**e** and **g**), after the stagnation point is shifted, the nematic defect first moves away from the stagnation point and only then gradually moves towards it. In both cases the stagnation point and the nematic defect return to their original positions after the pressure is restored

downstream and then gradually approaches the stagnation point, moving against the velocity direction. During the shift of the nematic defect, the stagnation point remains more or less stationary. This is an example of cross-talk between topological structures of different fields.

Nematic liquid crystals confined to porous networks are of particular interest due to their memory effects and switching possibilities, providing a route towards new optic and photonic materials [20, 88, 89]. The nematic alignment inside porous confinement can be controlled by flow [90, 91]. In Fig. 3.12 we show the flow-induced dynamics of a defect structure inside a junction of six cylindrical capillaries. In a cylindrical confinement with homeotropic anchoring and without flow, the nematic director prefers the escaped alignment, in which case the director in the middle of the channel points along the channel direction. This leads to a variety of equilibrium structures, depending on the direction of the director escape in the individual channels [89]. One such structure is shown in the first snapshot of Fig. 3.12, where a −1 topological defect resides in the centre of the junction. When flow is switched on, the preferred nematic alignment in a capillary is with the direction of the director escape along the flow. This leads to the flow-induced

**Fig. 3.12** Flow-induced nematic structures in porous microfluidic channel networks [92]. Transformational dynamics of a −1 nematic defect in a junction of six cylindrical micropores is observed, with the direction of the director escape in the initial equilibrium configuration away from the junction in the left and right channels, and towards the junction in the up, down, front, and back channels. The transformational dynamics is characterised by flow-induced director escape reversal in individual channels and merging of multiple defects into one. Time is measured in units of the nematic characteristic time scale $\tau\_\mathrm{N} = \xi\_\mathrm{N}^2/(\Gamma L)$

reconfiguration of the defect structure in a microjunction (Fig. 3.12). Upon the director escape reversal in the left and right channels, two +1 defects are created. They merge with the −1 defect in the junction centre, forming a defect structure with topological charge of +1. A similar process is repeated as a −1 defect is created in the up and in the down channel; these merge with the preexisting +1 defect, leading to the formation of a −1 defect in the junction. The position of the defect is slightly off-centre since it is advected by the flow. Figure 3.12 thus shows the transformational dynamics of a −1 nematic defect, induced by the flow through the channels. Depending on the geometry of the initial equilibrium structure and the arrangement of the flow towards and away from the junction, a variety of switching processes and flow-stabilised structures is possible [92]. This example shows how porous networks with microfluidic functionality can be turned into an advanced platform for the generation of various topological field states.

# *3.6.3 Colloidal Particles in Nematic Microfluidic Environment*

In nematic colloidal dispersions, the drag force exerted upon spherical particles depends on the particle velocity with respect to the director far field and on the nematic structure around the particle [93]. Similar to the viscosity in the Miesowicz geometry, the effective viscosity for spherical particles is higher if they are dragged perpendicular to the director, compared to the movement along the director. The problem of the drag force on spherical particles gains further complexity if colloids are introduced into chiral nematic liquid crystals, as, for example, in Ref. [94], where spherical particles with planar degenerate anchoring were dragged through a cholesteric by a constant force at small Ericksen numbers. It was observed that for Er ≲ 1 the drag force on the particle scales linearly with the velocity. However, there is a distinct dependence on the particle radius *R*: for motion along the cholesteric pitch, the effective viscosity scales as *η*(*R*) ∼ *R*<sup>0.7</sup>, while for particle motion perpendicular to the cholesteric pitch no definite scaling of *η* with particle radius is observed [94]. In the next section we shall discuss further implications of colloidal nematic systems, in particular in view of self-assembly and nematic configurations due to complex-shaped microparticles.

# **3.7 Nematic Colloids**

Nematic colloids are a soft material composed of particles, droplets, or bubbles embedded in a nematic fluid [21]. Nematic colloids attract great interest as they show effective elastic interactions between the particles, which originate from the nematic elasticity, shown in Sect. 3.7, in addition to conventional colloidal interactions such as steric, Coulomb, and van der Waals interactions. The exact profile, strength, and range of such elastic interactions are strongly affected by the surface properties of the particles, their shape, size, and topology, as well as by external confinement, geometry, and possible external fields. Nematic colloids are notably explored as novel materials with complex topological properties and as novel birefringent photonic materials, including for use as photonic crystals and metamaterials.

In this section we give a review of selected nematic colloidal systems. First, possible nematic director field configurations around a single spherical particle immersed in a liquid crystal are shown. Then elastic interparticle interactions are explained, which allow colloids to be organised into larger structures. In the last part we introduce complex-shaped particles and particles with different topologies and discuss their features.

# *3.7.1 Single Spherical Particle*

A colloidal particle immersed in a nematic deforms the director field, and the deformation depends strongly on the boundary conditions at the surface of the particle. The resulting nematic configuration is typically governed by an interplay between the bulk elastic and surface free energy.

In the case of weak homeotropic anchoring (Fig. 3.13c), the surface terms in the total free energy are smaller than the bulk terms and the director field remains almost undistorted. No topological defects occur in the bulk. However, if the anchoring is strong, the director around the particle typically imposes frustration on the surrounding bulk orientation, and the nematic director cannot adapt to this frustration without creating orientational singularities. A point-like hyperbolic −1

**Fig. 3.13** Director field profiles around a spherical particle with homeotropic anchoring immersed in nematic fluid: (**a**) elastic dipole with hyperbolic −1 defect, (**b**) elastic quadrupole with singular Saturn ring defect, and (**c**) elastic quadrupole with no singular defect

defect (hedgehog) or a ring defect is formed around the particle with homeotropic surface anchoring to achieve the net zero topological charge characteristic of a uniform field (Fig. 3.13a) [95]. The nematic configuration of a particle accompanied by the hyperbolic −1 defect has the symmetry and profile of an elastic dipole. Namely, the defect causes deformations of the nematic far-field director that mimic the dipolar electric field caused by electric charge distributions and have the same positional dependence in terms of the multipolar expansion.

In addition to the elastic dipoles, an elastic quadrupole can form, characterised by a −1/2 disclination loop encircling the particle (Fig. 3.13b) [96]. The Saturn ring emerges in regimes of generally smaller anchoring strength, smaller particle size, or stronger confinement by geometry or external fields. Effectively, it can be pushed to the surface of the particle (or virtually even within the particle) if the anchoring is weak enough. By opening the point defect into a ring, head–tail symmetry is established when the ring reaches the equatorial plane, and the defect structure together with the sphere represents an elastic quadrupole. Note also that the hedgehog −1 point defect is topologically equivalent to a −1/2 disclination loop (Saturn ring). In the case of degenerate planar anchoring, two surface—boojum—defects are formed at the opposite poles of the particle, which also results in the quadrupolar nature of the structure. The nematic elasticity of the liquid crystal causes highly anisotropic interparticle interactions—i.e. with repulsive and attractive directions—and can lead to self-assembly of particles into larger structures.

# *3.7.2 Interparticle Interactions*

Elastic deformations of the nematic director field caused by colloidal particles are energetically unfavourable, which leads to interparticle interactions that minimise the regions of such distortions. The long-range orientational order of the liquid crystal is also reflected in long-range nematic interparticle interactions.

The type of long-range interactions depends on the symmetries of the distortions in the director field induced by the particles. The force between two colloidal particles in a nematic host medium can be measured experimentally, and it has been shown that the interaction potential between particles with strong surface anchoring, which generates a hedgehog defect, is anisotropic and proportional to the third power of the inverse distance between the particles, similar to the electrostatic dipole–dipole interaction [25]. The interaction potential between particles with weak anchoring or with a Saturn ring defect was shown to have quadrupolar symmetry and is proportional to the fifth power of the inverse distance between the particles [97]. The binding energy of approximately micron-sized colloids can reach the order of 1000 *kBT* for dipolar-type interactions and the order of 100 *kBT* for quadrupolar interactions. The Landau–de Gennes free energy approach—presented above—has been used to calculate the interparticle interactions, giving excellent agreement with the experiments [48, 98].
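The power laws above can be made concrete with a minimal Python sketch. This is not from the chapter: the prefactors are hypothetical, chosen only so that the potential at contact (taken as *d* = 2*R*) matches the orders of magnitude quoted above, and the angular dependence is suppressed (optimal mutual orientation is assumed).

```python
# Hedged sketch: far-field elastic pair potentials in a nematic scale as
# ~1/d^3 (dipolar) and ~1/d^5 (quadrupolar). Contact energies (~1000 kT and
# ~100 kT for micron-sized colloids) are taken from the text; the prefactors
# are assumptions fixed so the potential at contact (d = 2R) matches them.

R = 1.0  # particle radius in microns (illustrative assumption)

def U_dipole(d, U_contact=-1000.0):
    """Dipolar elastic attraction along the director, in units of kT."""
    return U_contact * (2 * R / d) ** 3

def U_quadrupole(d, U_contact=-100.0):
    """Quadrupolar elastic attraction at the optimal angle, in units of kT."""
    return U_contact * (2 * R / d) ** 5

# Doubling the separation weakens the dipolar bond 8x, the quadrupolar 32x.
print(U_dipole(2 * R), U_dipole(4 * R))        # -1000.0 -125.0
print(U_quadrupole(2 * R), U_quadrupole(4 * R))  # -100.0 -3.125
```

The much steeper decay of the quadrupolar potential is why quadrupolar binding is typically an order of magnitude weaker than dipolar binding at comparable separations.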

Two equally oriented elastic dipoles in a uniform nematic cell attract if they are collinear (Fig. 3.14a). However, if oriented in opposite directions they repel in

**Fig. 3.14** Micrographs, director field configurations, and polarisation micrographs for stable particle pair configurations of (**a**) parallel elastic dipoles, (**b**) anti-parallel elastic dipoles, and (**c**) elastic quadrupoles. From [S. Žumer, I. Muševič, M. Ravnik, M. Škarabot, I. Poberaj, D. Babič, U. Tkalec, Nematic colloidal assemblies: towards photonic crystals and metamaterials, SPIE Proc. **6911**, 69110C (2008)]. Reprinted with permission from SPIE Publications

the direction along the far-field director, but attract sideways (Fig. 3.14b). Because there are only two attractive sites available around the sphere for dipoles oriented in the same direction, they form linear chains along the direction of the director (Fig. 3.15) [99, 100]. One should also note that the particles do not come into full contact with each other—they are separated by a small margin, which indicates the presence of a short-range repulsion. Typically, these short-range repulsive interactions are again of elastic origin, resulting from significant director field deformations at short interparticle distances.

**Fig. 3.15** Aggregation of dipolar nematic colloids. (**a**) Optical micrograph of a single colloidal particle with the defect structure of dipolar symmetry. (**b**) Schematic presentation of the nematic director field, where A denotes the actual hyperbolic point defect and B the virtual radial point defect inside the particle. Together they form a topological dipole. (**c**) Elastic dipoles form linear chains. (**d**) Chains bond together into a 2D crystalline colloidal cluster. Letters denote different types of bonds. Reprinted figure with permission from [M. Škarabot, M. Ravnik, S. Žumer, U. Tkalec, I. Poberaj, D. Babič, N. Osterman, I. Muševič, Phys. Rev. E **76**, 51406 (2007)]. Copyright (2007) by the American Physical Society

Elastic quadrupoles bind in different directions than elastic dipoles. Analogous to the electric case, quadrupoles repel if they are perfectly aligned (*θ* = 0◦) or perpendicular (*θ* = 90◦) to each other and attract at some finite angle *θ* (Figs. 3.14c, 3.16a, b), which depends on multiple parameters and is generally in the range of *θ* ∼20–30◦. Elastic quadrupoles form zig-zag chains, generally perpendicular to the director field (Fig. 3.16c). The particle pairs can form via four different attractive sites, as shown also in 2D colloidal crystals (Figs. 3.15d, 3.16d). Typically, the interactions of elastic quadrupoles are one order of magnitude weaker than the interactions of elastic dipoles [101].

Interparticle interactions strongly depend on the configuration of the director field. In the examples presented so far, colloidal particles were immersed in a nematic cell with a uniform field. However, additional types of structures can be observed

**Fig. 3.16** Particles assemble along different directions and form (**a**) kinked chains, (**b**) linear chains, (**c**) longer self-assembled chain structures and (**d**) 2D quadrupolar crystal-like structures. Reprinted figure with permission from [M. Škarabot, M. Ravnik, S. Žumer, U. Tkalec, I. Poberaj, D. Babič, N. Osterman, I. Muševič, Phys. Rev. E **77**(3), 31705 (2008)]. Copyright (2008) by the American Physical Society

in more complex field configurations. As a striking example, multiple emulsions of nematic liquid crystal and water droplets were used to study colloidal interactions in a radial director field [101]. Conservation of topological charge was observed experimentally, and the effects of different types of anchoring were studied.

In addition to dipole and quadrupole interactions, colloidal particles can also be bound by escaped defect lines, where the director escapes into the third dimension. Two particles with homeotropic anchoring can share an escaped—i.e. non-singular—line with an effective topological charge of −2 in a so-called bubblegum configuration [102], with the binding force being almost independent of the separation between the particles. In a nematic cell such a configuration is rarely observed, as the state is metastable; however, in twisted (chiral) cells such pairs form spontaneously and can also connect into larger 2D colloidal crystals [103].

Differently, two particles with Saturn ring defects can be entangled by a single escaped loop acting as an elastic string [26]. Three different nematic configurations can be achieved by applying laser tweezers and thermally quenching the nematic around the particles (Fig. 3.17): figure-of-eight (Fig. 3.17a), figure-of-omega (Fig. 3.17c, g), and figure-of-theta (Fig. 3.17d, h). The states are again metastable in a uniform nematic cell, but enable binding multiple particles into linear structures. Spontaneous entanglement can be realised in a twisted nematic cell, which allowed for the investigation of knot theory on the defect lines [104].

# *3.7.3 Assembly and Self-assembly of Colloidal Structures*

Colloidal systems attract major interest also because of their ability to interact with light [105]. Periodic structures of dielectric media with a unit cell size comparable to the wavelength of light, also known as photonic crystals, enable guiding and control of light at the microscale. Nematic colloids present an interesting platform for the development of soft matter photonics due to their self- and directed assembly, responsiveness to external stimuli, and strong binding interactions. They also show interesting potential in the development and research of topological photonic materials and metamaterials, including at the nanoscale [106, 107].

**Fig. 3.17** Assembling entangled nematic colloidal pairs by thermal quench using light. (**a**) Figure-of-eight. (**b**) Numerical simulation of the time evolution of entanglement measured in the number of iteration steps. (**c**) Evolution of the figure-of-omega state. (**d**) Figure-of-omega state transformed into figure-of-theta. (**e** and **f**) Numerically calculated structures. Reprinted figure with permission from [M. Ravnik, M. Škarabot, S. Žumer, U. Tkalec, I. Poberaj, D. Babič, N. Osterman, I. Muševič, Phys. Rev. Lett. **99**, 247801 (2007)]. Copyright (2007) by the American Physical Society

A method to assemble larger colloidal structures is directed assembly using laser tweezers, which can capture and guide a single colloidal particle [99]. Particles are manipulated into the vicinity of each other, close enough that structural forces can bind them into stable and ordered clusters. To create 2D crystal structures, thin nematic cells with properly processed surfaces are used, so that only one layer of colloids forms in the middle of the cell. For example, clusters and 2D colloidal crystals are assembled from elastically dipolar particles by joining two oppositely oriented linear chains, which attract due to the sideways dipolar attraction, or from quadrupolar particles by joining several kinked chains, which attract due to quadrupolar interactions [108].

In addition to homogeneous crystals, a range of binary structures were realised by using combinations of elastic dipoles and quadrupoles [109] or by using different-sized particles, creating hierarchical superstructures (Fig. 3.18) [110]. A 3D colloidal crystal with tetragonal symmetry and interesting material properties, such as strong electrostriction and electro-rotation, has also been assembled by joining anti-parallel chains of elastic dipoles [7].

Approaches in which particles are directed to occupy predesigned sites have been developed by creating spatially variable nematic profiles. Since the minimal energy configurations are different for particles with homeotropic or planar anchoring, they tend to localise in different regions: particles with homeotropic anchoring in regions of splay and particles with planar anchoring in regions of bend deformation [111]. Sculpting a flat surface with a cavity that is similar to the particle in size and shape can also change the sign of the interaction between the particle and the surface and lead to key–lock mechanisms for trapping the particles [112]. Topographic modulation of the surfaces can be used to select and localise particles by using convex and concave deformations [113].

The director field can also be altered by changing the geometry of the cell containing the nematic host medium, which can lead to the emergence of a variety of defects, depending on the shape of the surface. For example, if a nematic medium is introduced into a cell with an array of cylindrical microposts, defects occur around them in the bulk and attract colloidal particles (Fig. 3.19a–g). Colloids assemble to mimic the defect structure in the bulk even if they are remote, i.e. on the surface of the liquid crystal layer in which the microposts are submerged. In the case of a high packing fraction, a triangular colloidal lattice is formed (Fig. 3.19h) [114]. In a different study, colloidal chains of elastic dipoles were found to follow the disclination lines and the curved director field in the geometry of groovy cells [115].

**Fig. 3.19** If the geometry of the cell is altered by using microposts, ring defects emerge and guide the assembly of remote colloidal particles. (**a**) The director field in the corners of a cell with homeotropic anchoring can assume two different configurations with opposite 2D topological charges. (**b**–**d**) The bulk director field corresponds to the minimum of the free energy and has to satisfy topological charge conservation. If the field in the corners has the same topological charge, a ring defect occurs to neutralise it. (**e**) By curving the edge, the configuration with positive winding number is favoured. (**f**) SEM image of experimentally realised curved microposts. (**g**) At moderate surface coverage, ordered rings assemble around the micropost due to attraction by the bulk defect and repel one another via long-range interparticle repulsion. (**h**) At higher particle density, elastic interactions force them into higher order structures. Reused and adapted with permission from the publishers [M. Cavallaro, M.A. Gharbi, D.A. Beller, S. Čopar, Z. Shi, T. Baumgart, S. Yang, R.D. Kamien, K.J. Stebe, Exploiting imperfections in the bulk to direct assembly of surface colloids. Proc. Natl. Acad. Sci. **110**, 18804 (2013)]

Nematic interparticle interactions also depend strongly on the shape of the particles. For example, polygonal particle platelets with an odd number of sides exhibit dipolar symmetry and therefore dipolar interactions, while those with an even number of sides act as elastic quadrupoles [116]. Since nonspherical particles may interact as dipoles/quadrupoles at long range, while their short-range interactions depend on the geometry, they are suitable for realising 2D and 3D crystalline, quasicrystalline, and various locally ordered low-symmetry structures, which cannot be assembled from colloidal spheres [30]. Similar results were observed when using colloidal pyramidal cones and octahedra made from thin nanofoil, which are physical analogues of mathematical surfaces with boundaries and induce no defects when flat [117]. Switching between repulsion and attraction by re-pinning the disclinations at different edges of a polygonal prism using laser tweezers has also been demonstrated [118].

Among other means of control, chemical treatment can be used to switch between different types of cell surface anchoring, even at the nanoscale, and control colloids by inducing defects [119], or to switch between anchoring types on the colloids and thereby manipulate the type of interaction [120]. If ferromagnetic particles are used, a magnetic field can also be used to control the orientation of the particles and their mutual interactions [121].

# *3.7.4 Complex-Shaped and Topological Colloids*

The emergence of advanced chemical, physical, and biosynthetic methods in recent years has enabled the creation of complex-shaped anisotropic colloidal particles and even particles with different topologies. Combining such particles with a liquid crystal medium leads to a wide variety of topological field states and configurations. In comparison to low-temperature and magnetic systems at the atomic scale, where topological phases also emerge, experiments with topology in liquid crystals can be observed at much larger, optical scales. Additionally, the complex structure of topological particles can lead to interesting assembly properties and interactions with light, which makes them suitable for photonic applications based on topological materials.

Topological colloidal particles with non-zero genus *g* (i.e. effectively, the number of holes in the particle) were demonstrated [28], which stabilise a wide variety of complex nematic profiles when immersed in a nematic liquid crystal. The topology of the director field is governed by topological charge conservation and by the Gauss–Bonnet and Poincaré–Hopf theorems, which must be obeyed. By integrating the local Gaussian curvature *K* over the entire surface of the particle, its Euler characteristic, which is directly connected to the genus *g*, can be calculated as [28]:

$$2(1 - g) = \chi = \frac{1}{2\pi} \oint K \,\mathrm{d}S.\tag{3.49}$$

The Euler characteristic acts as a topological invariant, which means that it is preserved during continuous transformations of the particle surface; it also equals the topological charge of the surface. The net topological charge of the defects that emerge in the liquid crystal after a topologically non-trivial particle is immersed in it is exactly determined by topological charge conservation, but the exact number of defects and their types depend on the shape and orientation of the particle with respect to the bulk field, which is determined by the minimum of the total free energy. Various possible configurations of defects in the vicinity of toroidal colloids with different genus numbers have been demonstrated [28].
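The bookkeeping implied by Eq. (3.49) can be sketched in a few lines of Python. This is an illustration, not the chapter's own calculation; the sign convention for the compensating bulk charge (a sphere with strong homeotropic anchoring projects charge +1, compensated by a net −1 in the bulk) is the usual one but is stated here as an assumption.

```python
# Minimal illustration of Eq. (3.49): the Euler characteristic chi = 2(1 - g)
# of a closed orientable genus-g surface equals its topological charge, which
# the surrounding nematic must compensate. Hedged convention: for strong
# homeotropic anchoring the particle carries charge chi/2, so the bulk hosts
# defects of net charge -chi/2 (a sphere: chi = 2, bulk charge -1).

def euler_characteristic(g: int) -> int:
    """chi = 2(1 - g) for a closed orientable surface of genus g."""
    return 2 * (1 - g)

for g in range(4):
    chi = euler_characteristic(g)
    print(f"genus {g}: chi = {chi}, compensating bulk charge = {-chi // 2}")
# A torus (g = 1) has chi = 0: no net topological charge must appear in the bulk.
```

This makes explicit why toroidal particles admit defect-free configurations while spheres with strong anchoring always nucleate a hedgehog or Saturn ring.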

Knot-shaped colloidal particles present another interesting platform to study the interplay of the topologies of the particles, the nematic field, and the induced defects, which leads to knotted, linked, and other topologically non-trivial field configurations. Particle links in nematics were also realised, as an example of a topologically conditioned nematic colloidal material [8]. An example of a configuration of linked rings, also known as a Hopf link, with planar surface anchoring is shown in Fig. 3.20. Topological transitions induced by changing the shape and genus of the knotted particles have also been studied numerically [122].

**Fig. 3.20** Hopf link colloidal particle with tangential anchoring in a nematic. (**a**, **b**) Polarising optical micrographs of a Hopf link in a nematic cell without and with a lambda plate, respectively, with the far-field nematic director *n*<sup>0</sup> marked on the image. (**c**, **d**) Experimental and theoretical in-plane cross-sections with the director field around the link. (**e**) Elastic free energy dependence on the deviation from the equilibrium angle and on the centre-to-centre distance of the two link components—rings. (**f**–**h**) Three perspective views from mutually orthogonal directions of a numerically calculated director field around the link. (**i**) Landau–de Gennes free energy vs. angle *θ*<sub>1,2</sub> between the bulk director orientation and the plane bisecting the angle *θ*<sub>1,2</sub> = 20◦ between the rings at fixed radius *R* and *d* = 0.22, with the energy minimum corresponding to configuration (**f**). Insets in (**i**, **j**): metastable configurations. Reused and adapted with permission from the publishers [A. Martinez, L. Hermosillo, M. Tasinkevych, I.I. Smalyukh, Linked topological colloids in a nematic host. Proc. Natl. Acad. Sci. **112**, 4546 (2015)]

Current research in the field of nematic colloids is increasingly directed towards finding new possible interactions between particles, governed by their topologically non-trivial or fractal shape [123], which induces tangled director fields and defects in the liquid crystal medium. Such particles can nowadays be fabricated and used in experiments, and show a lot of promise for photonic applications. A strong recent direction is also the exploration of motile "active" colloids in an anisotropic nematic background, which can be driven by external fields or internally, via self-propelled particles such as motile bacteria or molecular motors [74, 124, 125].

# **3.8 Conclusions**

Nematic fluids cover a span of materials, from molecular fluids and colloidal dispersions to viruses, with their main material characteristic being the orientational order of the building blocks. This orientational order is soft and, as an effective elastic medium, responsive to external stimuli, including mechanical fields, pressure, light, and electric and magnetic fields. The strong susceptibility to external stimuli makes nematic fluids potent materials in systems that require controllability and tunability, which is today extensively used in display and optical applications, with strong development also towards photonic and metamaterial applications.

A major emergent direction in nematic fluids is active nematic materials, which are inherently out-of-equilibrium systems based on motile building blocks that can show nematic ordering. Active nematic systems include kinesin-driven microtubules, bacterial colonies, and flocks of animals. Topological defects in active fluids are emerging as major elements that determine the active material properties, and there exists an interesting route for the transfer of knowledge from passive nematic fluids to active nematic fluids. For example, in terms of structure, nematic braids realised by temperature quench in passive nematics are probably the closest passive analogues to three-dimensional active turbulence, one of the top-level challenges in the understanding of active nematics.

Finally, nematic fluids are a topic that naturally reaches, in an interdisciplinary way, towards other fields of science and technology, notably including complex flows and active fluids.

# **References**




**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Amphiphilic Janus Particles at Interfaces**

**Andrei Honciuc**

# **Acronyms**


A. Honciuc

Institute of Chemistry and Biotechnology, Zurich University of Applied Sciences, Waedenswil, Switzerland e-mail: andrei.honciuc@zhaw.ch

© The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9\_4

PS-PDIPAEMA polystyrene-poly[2-(diisopropylamino)ethyl methacrylate]
PtBA poly(tert-butyl acrylate)

# **4.1 Introduction**

Janus particles (JPs) can generally be defined as asymmetric particles with at least two surface regions or bulk compositions differing in their physicochemical properties. JPs can adopt different shapes: for example, they can be perfectly spherical with two hemispheres having different surface properties, as depicted in Fig. 4.1, or they can take shapes such as snowman [1], dumbbell, hybridised-orbital-like [2], or mushroom [3], with clear geometrical and topological asymmetries. Other particles, such as raspberries [4], rods or discs [5], have also been included under the same "Janus" category—generally all asymmetric particles, as long as there is a difference in composition or surface properties between distinct regions of the same particle. Probably the most typical shape of a Janus particle is that of a dumbbell or snowman, Fig. 4.1. Unlike homogeneous particles (HPs), JPs have some interesting properties and exhibit extra functionality conferred by their asymmetry. One example of such functionality is amphiphilicity. Due to the inherent polarity contrast between two surface regions, they resemble surfactants with one polar side and one less polar side; therefore, the JPs are promising as "solid

**Fig. 4.1** Types of Janus particles

state amphiphiles" or the next generation of amphiphiles. But unlike surfactants, which are generally small molecules or low molecular weight polymers, JPs exhibit some significant differences. First, because they are solid-state particles and due to their size, they have large interfacial attachment energies, on the order of thousands of *k*T (a quantity that scales with *R*<sup>2</sup>, where *R* is the radius of the particle), meaning that once adsorbed at the interface they remain trapped; second, their diffusion through the liquid is much slower. This can be an advantage, because JPs can be used as emulsifiers of oils and water and create ultrastable Pickering emulsions. Pickering emulsions can also be generated with HPs, but it has been shown that due to their amphiphilicity the JPs are several times more interfacially active and thus superior in such applications. Furthermore, the JPs are also active at the air–water interface, which makes them attractive as stabilisers of air bubbles and foams. Their size may also bring further advantages: for example, they can be used as carriers of actives (small molecules serving as pharmaceutically active ingredients), as smart catalysts moving in concentration gradients, and even as nanomotors for the transportation of "heavy" cargo, which molecular surfactants cannot do. JPs can also be regarded as building blocks of matter that can self-assemble to give rise to suprastructures. JPs can be multifunctional because they can carry different properties on each lobe; this is especially attractive for creating new multifunctional materials where surface and bulk-like properties can be combined to obtain unexpected functionalities. For example, it has been demonstrated that the conductivity and surface polarity of snowman-type JPs can be tuned by changing the lobe ratio between a semiconductive lobe and an electrically insulating lobe [6].
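The *R*<sup>2</sup> scaling of the attachment energy can be illustrated with a short, hedged estimate. The formula below is the commonly used detachment energy of a homogeneous sphere from a flat fluid interface, not the chapter's own calculation, and all numerical inputs are illustrative assumptions.

```python
import math

# Hedged estimate: the energy to detach a homogeneous spherical particle from
# a flat fluid interface is commonly written E = pi * R^2 * gamma * (1 - |cos(theta)|)^2,
# which exhibits the R^2 scaling mentioned in the text. All numbers below are
# illustrative assumptions, not values from the chapter.

def detachment_energy(R, gamma, theta_deg):
    """Energy (J) to detach a sphere of radius R (m) from an interface of
    tension gamma (N/m) at contact angle theta (degrees)."""
    cos_t = math.cos(math.radians(theta_deg))
    return math.pi * R**2 * gamma * (1 - abs(cos_t))**2

kT = 1.38e-23 * 298  # thermal energy at room temperature, ~4.1e-21 J

# A 10 nm particle at an oil/water interface (gamma ~ 50 mN/m, theta = 90 deg)
E = detachment_energy(R=10e-9, gamma=0.05, theta_deg=90)
print(f"{E / kT:.0f} kT")  # on the order of thousands of kT, as stated
```

Because the energy grows as *R*<sup>2</sup>, micron-sized particles are bound by millions of *k*T, which is why particle adsorption at interfaces is effectively irreversible while molecular surfactants exchange freely.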
This opens up the path to new multifunctional materials made from Janus building blocks that carry different functionalities—optic, magnetic, surface functional groups, etc.—on the different lobes.

# **4.2 Short History of Asymmetric Janus Particles**

Janus particles have a relatively short history and only came into existence recently, through the imagination of a handful of scientists in the late 1980s. The concept of an asymmetric and amphiphilic particle was put forward by Casagrande, Veyssié and de Gennes. The first synthesis of micron-sized JPs is attributed to the former two authors, while de Gennes baptised them after the Roman god Janus. Janus was the two-faced god of transitions, gates, passages and new beginnings, after whom the month of January was named. Janus was considered one of the most important gods in Roman mythology because he provided passage to the other gods. Janus is often depicted with two faces of an old man, but initially he was depicted with the face of a young man looking back and of an old man looking forward, very suggestive of the passage of time. In his famous Nobel Prize lecture [7], de Gennes also called the asymmetric particles "Janus grains". Interestingly, the amphiphilic nature of the Janus glass beads produced by Casagrande et al. [8] could be probed directly for the first time from photographs of breath patterns ("figures de souffle") clearly showing droplets of water pearling up on the hydrophobic side but forming a contiguous film on the hydrophilic side. A few years earlier, in 1985, Grünning et al. [9] filed a patent claiming a procedure for the preparation of amphiphilic particles from 100 μm hollow beads, with the exterior surface hydrophobised and the hydrophilic side remaining in the interior; these were then crushed to produce irregularly shaped glass shards that were amphiphilic. This procedure was later detailed by one of the inventors, Rosmmy, in 1998 [10]. Remarkably, right from the beginning the inventors proposed the use of such particles as surface-active products, for emulsification, production of foams and deployment in tertiary oil recovery. This was the first time that amphiphilic particles were produced in large amounts. In 1991 Chen et al.
[11] synthesised biphasic snowman-type polymeric particles by seeded emulsion polymerisation and phase separation, but were probably unaware of the fact that they had synthesised the first polymeric JPs. Through the 1990s the scientific community did not seem to take much interest in these particles. Only much later, in the early 2000s, were new synthetic routes published. This was in part triggered by the theoretical work of Ondarçuhu [12] and later by Binks and Fletcher [13], who showed by calculation that the interfacial activity of spherical JPs at their maximum amphiphilicity (highest polarity contrast), in terms of interfacial desorption energy, should be up to three times larger than that of HPs, and much larger for snowman or dumbbell JPs. However, the desorption energy alone is not a proper gauge of interfacial activity; a better one is the ability to lower the interfacial tension, as will be discussed later. Since then, research on Janus particles has grown exponentially, as estimated from the increasing number of publications each year. Initially the JPs could only be produced in very small amounts and the synthetic challenges prevented their use in applications. Currently JPs can be produced in large amounts, and their use in new applications is therefore being explored: emulsifiers, catalysts, foam stabilisers, polymer blend compatibilisers, amphiphiles, building blocks, etc. It turns out that the Janus character is not reserved for synthetic particles only, but also occurs in naturally occurring proteins, such as the hydrophobins produced by fungi—HFBII from *Trichoderma reesei*, which is nearly globular with a 3 nm diameter and a mass of 7.2 kDa [14]. Interestingly, it has been shown that HFBII is an excellent foam stabiliser and is responsible for beer gushing.
Beer gushing is the phenomenon of beer foam gushing out of the bottle when it is opened or mechanically shocked, e.g. by hitting its bottom on the table, completely emptying the bottle [15]. HFBII ends up in the malt through fungal infection.

# **4.3 General Synthetic Routes**

JPs can be prepared by chemical or physical methods. Next we give a short overview of the methods used in the synthesis of JPs.

# *4.3.1 Masking and Asymmetric Modification*

This preparation method relies on a simple concept: in a first step, one side of a spherical particle is masked by a protective layer, and in a second step, chemical or physical modification is performed on the unmasked part, followed by the removal of the masking layer. In this way the original HP is now a JP, because it has different surface properties, even though its bulk composition remains the same. While the concept of the method is simple, its implementation and scalability can be an issue. The masking and modification method was first used by Casagrande et al. [8], who used varnish to mask half of micron-sized spherical glass beads and performed silanisation/hydrophobisation on the other half of the bead to produce JPs. Instead of applying a varnish, a spherical particle can be deposited on a flat surface, or self-assembled in a monolayer, followed by the deposition, via evaporation, of a metal on the exposed part, Fig. 4.2. Due to the masking, the metal will only deposit on the side of the particle facing away from the flat substrate. In this way hybrid polymer/metallic or silica/metallic JPs can be made. This is a highly effective method to produce JPs with tunable optical or electric properties, as the thickness of the deposited metal layer can be precisely controlled; Kawaguchi et al. [16] applied this method to produce JPs with a functional gold surface for control of the surface plasmon resonance, and thus the colour of the particles, with potential applications in paper displays. Other patchy JPs with tunable optical properties were successfully prepared by Composto et al. [17]. Bimetallic Co/Ni, Ag/Au, and Ni/Au Janus particles could be made using the same method by Carroll et al. [18] from a single layer of self-assembled silica beads on a substrate in two steps:

**Fig. 4.2** Masking and surface modification techniques for manufacturing of Janus particles

first the particles were coated with one metal using e-beam evaporation, then the beads were inverted and coated with another metal. The preparation of JPs from 2D layers is highly effective but not scalable to produce large amounts. In this context, Granick's group [19, 20] succeeded in producing gram-scale amounts of JPs by first taking fused silica HPs (800 nm and 1.5 μm in diameter) and using them to emulsify molten wax at high temperature. The obtained o/w emulsion was then cooled to room temperature to obtain solid wax colloidosomes with the HPs trapped/embedded on their surface. Because the HPs were half protected by the wax, they could be chemically reacted with APS on the water-exposed part; after dissolving the wax colloidosome, they could then be hydrophobised on the other side with OTS. In this way gram-scale amounts of JPs could be obtained. Suzuki et al. [21] used the same strategy to prepare large amounts of JPs from thermo-responsive pNA microgels by first trapping them at the heptane/water interface via emulsification to create o/w Pickering emulsions. Once at the interface of the oil droplets, the microgel particles were further reacted with the water-soluble reagents ethylenediamine (ED) and 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride (EDC), introducing amine groups at the surface of the water-exposed gel particles. Smaller Au nanoparticles could then be attached only on the -NH<sub>2</sub>-rich side of the stimuli-responsive Janus microgel particles. Fujimoto et al. [22] prepared Janus microspheres at the solid–liquid interface by allowing poly(methacrylic acid-co-nitrophenyl acrylate) microspheres to interact with a substrate on which human immunoglobulin (IgG) had previously been adsorbed. After the microspheres settled on the flat substrate, NaOH was added to activate the ester bonds by cleaving the p-nitrophenol.
The part of the microsphere touching the IgG-substrate reacted with the amine and thiol bonds of the IgG molecule, while the other side remained unmodified.

# *4.3.2 Seeded Emulsion Polymerisation and Phase Separation*

The preparation of JPs by seeded emulsion polymerisation and phase separation of polymers was first performed by Chen et al. [11] in 1991, although probably unaware that such particles were being baptised "Janus particles" by P. de Gennes in the same year. The procedure is simple in principle and starts with monodisperse seed latex polystyrene (PS) particles. Their methods were later revived and extended to a multitude of different biphasic polymeric Janus particles. The method consists of first preparing polymeric latex particles in the nanometre range, such as PS. These are then re-dispersed in water and a second monomer (partially soluble in water), such as MMA or 3-TSPM, is added. Emulsification of the monomer is necessary for the swelling of the latex particles to succeed. The mixture is then given some time for the monomer to swell the seed particles, and eventually the polymerisation is started. As the monomer polymerises, the newly created polymer, due to its incompatibility with PS, bulges out from the seed particles, leading to the creation of a second lobe and the formation of snowman

**Fig. 4.3** Cartoon depicting the preparation of JPs via seeded emulsion polymerisation and phase separation method. The second lobe grows as the monomer (M) from the reservoir is being consumed

or dumbbell type biphasic Janus particles, as depicted in Fig. 4.3. The degree of phase separation, as well as the final geometry of the obtained JPs, depends on the Flory–Huggins interaction parameter, the wettability between the two polymers, the degree of crosslinking and the phase separation kinetics. The wettability between the polymers can also be influenced by the initiator type, the addition of surfactant, etc. [11]. Several different types of polymeric JPs could be made this way, and with precisely tuned reaction conditions snowman-type particles could be produced. JPs with hard and soft lobes, PtBA/PS JPs, were prepared by Bon et al. [23]; Daeyeon Lee et al. [24] prepared PS/PPA JPs that can be subsequently modified via thiol-yne click reactions; Sun et al. [25] obtained PS/PMMA with a hollow PS lobe; Hoffmann et al. [26] prepared PMMA/PS JPs from PMMA seeds; Weitz et al. [27] synthesised PS/PMMA and PS/PtBA particles. Note the convention we have used: the polymer of the initial seed particle is written first, followed by that of the second lobe.

# *4.3.3 Microfluidic and Capillary Electro-Jetting Methods*

Multi-compartmented or multiphasic particles can be fabricated by microfluidic as well as electro-jetting processes [28]. Droplet microfluidics refers to the preparation and manipulation of discrete micron-sized droplets, double-emulsion droplets, microbubbles, etc., and the fabrication of polymeric JPs by this method has been extensively reviewed [29]. Black-and-white bicoloured JPs with electrical anisotropy were synthesised for the first time by Torii et al. [30] using a microfluidic co-flow system. Pigments of carbon black and titanium oxide were dispersed in IBA and then separately introduced into a Y-junction at the same volumetric flow rate to form a two-colour stream, which breaks up into droplets due to surface tension further down the channel. The obtained Janus droplets were polymerised outside of the microfluidic system by heating, but in principle the polymerisation can also be done in the microfluidic channel by UV exposure. Multicoloured JPs with electrical anisotropy can be used in panel displays, where the colour switch is achieved by changing the orientation of the JPs, actuated between two electrodes by applying a voltage.

The electrodynamic co-jetting method consists of flowing two or more different polymer solutions through a bi-compartmented metal capillary while maintaining laminar flow; at the apex of the capillary tip the polymer solutions come together to form a droplet, to which a high electrical voltage is applied. The electric field causes the solution to form a Taylor cone, which emits a spray of individual droplets accelerating toward the counter electrode. During this flight the solvents evaporate, and the resulting polymeric nanoparticles are collected on the surface of the counter electrode. Because the solvent evaporation is rapid and the process is very fast, the polymers in contact do not have sufficient time to mix, and the resulting nanoparticles remain bi-compartmented. These methods have the great advantage of high throughput for preparing large amounts of micrometre-sized multiphasic Janus particles. Preparing monodisperse nano-sized particles is more challenging; however, Lahann et al. [31] succeeded in synthesising biphasic PMMA/PtBMA Janus nanoparticles with diameters *d* = 172 ± 28 nm, as well as tri-phasic particles [32] from poly(ethylene oxide), poly(acrylic acid) and poly(acrylamide-*co*-acrylic acid). Furthermore, the individual polymer phases can be independently loaded with biomolecules or selectively modified with model ligands [33]. The resulting JPs are generally non-crosslinked, but they can be crosslinked by adding a photoinitiator and exposing the droplets to UV light immediately after their generation [34].

# *4.3.4 Polymer Co-precipitation and Phase Separation*

One of the simplest and most feasible pathways to synthesise JPs is the phase separation of polymeric solutions, such as A/B homopolymer/homopolymer and AB/C copolymer/homopolymer blends, dissolved under confinement, followed by the evaporation of the solvent. This method usually involves an oil-in-water (o/w) emulsion as the confinement system, in which the oil droplets comprise two incompatible polymers with a large Flory–Huggins interaction parameter, such as PS and PMMA, dissolved in a common solvent [35]. Upon evaporation, the solvent leaves behind solid, well-defined particles with the two separated polymer phases inside. Deng et al. [36] were able to prepare JPs with hierarchical structures from AB/C polymer blends. The preparation of polymeric Janus nanoparticles by this method involves first dissolving multiple, chemically distinct polymers in a mutually favourable solvent and gradually altering the solubility character of the solution until the polymer molecules co-precipitate as particles and the different polymers phase separate. Priestley et al. [37] succeeded in scaling up the preparation of JPs and multi-compartmented particles by the co-precipitation and phase separation method from dissimilar polymers to up to 1400 kg/day by designing a so-called confined impinging jet mixer, a technique they termed flash nanoprecipitation (FNP). In the FNP method, PS and PI (with a Flory–Huggins interaction parameter *χ*PS–PI = 0.07) were dissolved at a certain ratio in a common solvent, THF, and injected into one arm of a fluidic device, while at the same time the "anti-solvent", DIW, was flowed through a second arm. The two arms converged into a single one, called the mixing region, where the PS and PI co-precipitated and phase separated as the solvent rapidly exchanged with the anti-solvent.
By changing the polymer feed concentration from 0.1 to 1.0 mg/mL they could systematically increase the size of the Janus nanocolloids from 125 to 540 nm in diameter. The anisotropy of the produced particles could also be changed by altering the PS–PI polymer ratio from 1:4 to 4:1 to produce multifaceted colloids. The latter technology proves to be highly versatile; perhaps its only disadvantage is that, because they start from already-formed polymer chains, the co-precipitation and phase separation methods do not offer the possibility of polymer cross-linking. This restricts the use of the obtained JPs, for example, as stabilisers in Pickering emulsions, in which case the JPs would likely disintegrate/dissolve upon interaction with the oil.

# **4.4 Tuning the Surface Polarity in JPs**

Unlike HPs, whose surface polarity can be changed only by chemical means, the surface polarity of JPs can be tuned in a gradual and predictable way by adjusting the geometric ratio, or the surface-area ratio, between the lobes of different polarities. In this way, homologous series of JPs can be created, in analogy to homologous series of molecular surfactants, for example, by increasing the amount of monomer added to the seed latex particles in seeded emulsion polymerisation, Fig. 4.4.

Wu and Honciuc [1] synthesised a homologous series of PS/P(3-TSPM) JPs by changing the volume of 3-TSPM monomer added to the PS seed NPs, see Fig. 4.4, and demonstrated that the PS lobe is less polar than the P(3-TSPM) one; by increasing the size of the more polar P(3-TSPM) lobe they could achieve polarity inversion within the homologous series purely by geometric means, once the P(3-TSPM) lobe became larger than the PS lobe. The polarity inversion in the homologous series could be demonstrated by heptane–water emulsification experiments: whereas the JPs with the smallest P(3-TSPM) lobe have a high affinity to heptane, the JPs with the largest P(3-TSPM) lobe have a higher affinity to water, and as a consequence an inversion of the emulsion phase from w/o to o/w occurs at the middle of the homologous series. The aspect ratio, and implicitly the polarity of the JPs, can also be tuned dynamically by stimuli, as in stimuli-responsive particles that change both their geometry and polarity. One such example are the shape-changing and pH-responsive PtBA/PS particles produced in Lee's group [38]: upon cleaving the -tBA group by hydrolysis in the PtBA lobe, pH-responsive PA/PS JPs are obtained. At high pH the carboxyl groups are mostly ionised, resulting in a large intake of water and swelling of the PA lobe; as a consequence, the particles become polar, have a good affinity to water and form o/w emulsions. At low pH, below the pKa value, the -COOH groups are protonated, non-ionic and

**Fig. 4.4** SEM images of PS/P(3-TSPM) JPs with progressively enlarged P(3-TSPM) lobe (light-grey/white) from the same seed PS NPs (dark-grey). (**a**)–(**e**) JPs with progressively larger lobes obtained for a volume of 3-TSPM monomer (**a**) 0.5 mL, (**b**) 1 mL, (**c**) 2 mL, (**d**) 3 mL and (**e**) 4 mL added to 1 g of PS seed NPs; (**f**) EDX spectra, normalised with respect to the reference carbon peak of the PS seed NPs. (**g**) EDX mapping of "2 mL TSPM" JNPs obtained from larger seed PS NPs, 320 ± 5 nm diameter, showing an asymmetric distribution of the oxygen and silicon elements, namely a higher concentration in the P(3-TSPM) lobe, in contrast to a symmetric distribution of carbon in both Janus lobes. Reprinted with permission from Ref. [1]. Copyright 2016 American Chemical Society

there is no water intake of the lobe; the geometry of the lobe remains rigid and the particles have higher affinity to the oil phase forming w/o emulsions.

**Surface Polarity Contrast Between Lobes: Quantification of Amphiphilicity** The JPs are amphiphilic because of the inherent surface polarity contrast between the lobes. The concept of amphiphilicity is, however, understood only in a qualitative way: it denotes the ability of the amphiphile to adsorb and partition at the oil–water or air–water interfaces.

The earlier theoretical work of Ondarçuhu [12] already set the framework for estimating the amphiphilicity balance of a spherical JP adsorbed at an interface (oil–water or air–water) by measuring its contact angle with reference to one of the phases, usually the water phase. His geometrical model was parameterised so as to include the angle *α*, which denotes the position of the boundary between the apolar and polar regions of the Janus lobes, see Fig. 4.5. The contact angle of the JP with the interface is given by *β*. For a perfectly spherical JP, *α* is essentially a measure of its amphiphilicity: zero amphiphilicity (corresponding to homogeneous particles) corresponds to either *α* = 0° or 180°, while the strongest amphiphilicity is expected at *α* = 90°. Two additional parameters were introduced, namely *θA* and *θP*, which are the contact angles of one of the phases (depending on the chosen reference, but typically water) with each of the lobes, the apolar and

**Fig. 4.5** Model describing a spherical JP, apolar HP and polar HP at the oil–water interface. The parameters can be defined as follows: *α* keeps track of the position of the boundary between the apolar and polar regions of the JP, whereas *β* keeps track of the position of the oil–water interface relative to the particle centre and it represents the contact angle with the water phase. The angle *θ*<sup>A</sup> is the water contact angle of the apolar HP corresponding to the apolar region of the JP, the *θ*<sup>P</sup> is the water contact angle of the HP corresponding to the polar region of the JP

the polar, respectively. These angles can be better understood if the HPs corresponding to each of the JP lobes are depicted at the interface, as in Fig. 4.5. For JPs, no amphiphilicity is expected when *θA* − *θP* = 0° and the strongest amphiphilicity when *θA* − *θP* = 180°, meaning that one lobe has a perfectly hydrophobic surface and the other a perfectly hydrophilic surface, respectively.

Later, Jiang and Granick [39] introduced the concept of Janus balance or "*J*-value" to effectively quantify the amphiphilicity as the dimensionless ratio of the work to transfer an amphiphilic JP from the oil–water interface into the oil phase, normalised by the work needed to move it into the water phase:

$$J = \frac{\sin^2 \alpha + 2 \cos \theta_P (\cos \alpha - 1)}{\sin^2 \alpha + 2 \cos \theta_A (\cos \alpha + 1)},\tag{4.1}$$

where the angles *α*, *θA* and *θP* have the same meaning as in Fig. 4.5. The above equation shows that the Janus balance depends on the relative areas of the hydrophilic and hydrophobic lobes, quantified by *α*, and on the hydrophobicity of the two sides, quantified by *θA* and *θP*. When *θA* and *θP* are fixed, *J* increases as *α* increases (because cos *θP <* 0), meaning a larger hydrophilic area. When *α* is fixed, *J* increases when *θA* and *θP* increase, which corresponds to the hydrophilic part becoming more hydrophilic or the hydrophobic part becoming less hydrophobic. The larger the magnitude of *J*, the more hydrophilic is the JP, which follows the same trend as the HLB [40] for surfactant molecules: a larger HLB means a higher affinity for water. The *J*-value can therefore be calculated from the interfacial contact angle and the geometry of the Janus particles. However, the above model has two caveats: first, it was deduced by assuming a perfect orientation of the particle at the oil–water interface, i.e. the Janus axis perpendicular to the interface, and second, it assumed a perfectly spherical particle; *α* loses its meaning for a snowman, dumbbell or any other JP shape, and the problem has to be re-parameterised. Therefore, the above model is not generally applicable. Based on these calculations, Binks and Fletcher [13] have shown that the interfacial activity of a JP can be up to three times larger than that of an HP. This has lately been taken, mistakenly, as an upper limit of what JPs can achieve, but in fact the interfacial activity of these dual particles can exceed a factor of three for other geometries, as shown in simulations by Gao et al. [41]. It would nevertheless be more useful if amphiphilicity could be discussed quantitatively and measured directly.
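Equation (4.1) is straightforward to evaluate numerically. The sketch below (the function name `janus_balance` is our own, not from the cited works) computes *J* from the three angles given in degrees.

```python
import math

def janus_balance(alpha_deg, theta_A_deg, theta_P_deg):
    """Janus balance J of Eq. (4.1); all angles in degrees.

    alpha_deg   -- position of the apolar/polar boundary on the sphere
    theta_A_deg -- water contact angle of the apolar region
    theta_P_deg -- water contact angle of the polar region
    """
    a, tA, tP = (math.radians(x) for x in (alpha_deg, theta_A_deg, theta_P_deg))
    numerator = math.sin(a) ** 2 + 2 * math.cos(tP) * (math.cos(a) - 1)
    denominator = math.sin(a) ** 2 + 2 * math.cos(tA) * (math.cos(a) + 1)
    return numerator / denominator

# Symmetric wetting at alpha = 90 deg: the two works of transfer coincide, J = 1.
J = janus_balance(90.0, 90.0, 90.0)
```

Consistent with the discussion above, enlarging the hydrophilic area (larger *α* at fixed *θA*, *θP* with cos *θP* < 0) drives *J* above unity.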

In a different approach to quantify the amphiphilicity of JPs of any shape, Honciuc et al. [1, 6, 42] have proposed the direct calculation of the HLB of a JP using Griffin's approach [43], that is, the same model used to calculate the HLB for a homologous series of surfactants [40], adapted for JPs:

$$\text{HLB} = 20\, \frac{A_{\text{Polar}} F_1}{A_{\text{Polar}} F_1 + A_{\text{Apolar}} F_2},\tag{4.2}$$

where *A*Polar is the area of the polar lobe, *A*Apolar is the area of the non-polar lobe and, in addition, the weighting factors *Fi* (*i* = 1*,* 2) have been introduced to account for the "degree" of polarity of the lobes. The original approach of Griffin for surfactants did not account for the polarity of the surfactant moieties, but only considered their relative molecular weights, 20*Mw(*polar*)/Mw(*molecule*)*. The above equation takes the value 20 for *F*<sup>2</sup> = 0 and 0 for *F*<sup>1</sup> = 0, which are two limiting situations: strongly polar and apolar particles, respectively, with no amphiphilicity. On the other hand, *F*<sup>1</sup> = 1 (hypothetical 100% polar surface) and *F*<sup>2</sup> = 1 (hypothetical 100% non-polar surface) assume an "ideal" polarity contrast between the two surface regions, see Fig. 4.6a, and then the HLB is decided solely by the geometry of the lobes, i.e. their aspect ratio. The polarity weighting factors *F* can be calculated from the ratio between the polar or dispersive surface energy component and the total surface energy of each Janus lobe, as depicted in Fig. 4.6:

$$F_1 = \frac{\gamma_1^p}{\gamma_1^p + \gamma_1^d} \quad \text{and} \quad F_2 = \frac{\gamma_2^d}{\gamma_2^p + \gamma_2^d},\tag{4.3}$$

where the small Greek gammas are the surface energies and the superscripts "*p*" and "*d*" indicate the polar and dispersive (apolar) surface energy components of the corresponding Janus lobes, with subscript 1 for the polar lobe and 2 for the apolar lobe. In practice, to determine *F* one must know the surface energy and its polar and dispersive components, which is not trivial. Recently, Mihali and Honciuc [6] measured the surface energy and the polar/dispersive components of each Janus lobe in a homologous series of semiconducting PPy/P(3-TSPM) JPs with increasing size of the polar lobe, and with these values they calculated the corresponding weighting factors *F* and subsequently the HLB values; these are given in Table 4.1.
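Equations (4.2) and (4.3) can be combined in a few lines of code; the sketch below (function names are ours, and the lobe areas use the spherical-cap formula *A* = 2*πRh* of Table 4.1 and Fig. 4.6c) is a minimal illustration, not a reproduction of the measured data of Ref. [6].

```python
import math

def polarity_factors(gamma1_p, gamma1_d, gamma2_p, gamma2_d):
    """Eq. (4.3): F1 for the polar lobe (1), F2 for the apolar lobe (2).
    Inputs are the polar (p) and dispersive (d) surface-energy components."""
    F1 = gamma1_p / (gamma1_p + gamma1_d)
    F2 = gamma2_d / (gamma2_p + gamma2_d)
    return F1, F2

def hlb(R_polar, h_polar, R_apolar, h_apolar, F1=1.0, F2=1.0):
    """Eq. (4.2), with lobe areas A = 2*pi*R*h (spherical caps, Fig. 4.6c)."""
    A_polar = 2.0 * math.pi * R_polar * h_polar
    A_apolar = 2.0 * math.pi * R_apolar * h_apolar
    return 20.0 * A_polar * F1 / (A_polar * F1 + A_apolar * F2)

# Ideal polarity contrast (F1 = F2 = 1) and equal lobes: midpoint of the scale.
midpoint = hlb(1.0, 1.0, 1.0, 1.0)
```

For equal lobes with ideal polarity contrast the HLB is 10, the balanced midpoint of Griffin's 0–20 scale, while *F*2 = 0 and *F*1 = 0 recover the limiting values 20 and 0 quoted above.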

**Fig. 4.6** (**a**) Hypothetical amphiphilic dumbbell Janus particle displaying an ideal polarity contrast between a purely dispersive surface and a "purely polar" surface of the two Janus lobes, where the polarity factors are *F*<sup>1</sup> = *F*<sup>2</sup> = 1; (**b**) the more realistic representation of a snowman Janus particle with the surface polarity of the lobes departing from ideality, whose polarity factors *F*<sup>1</sup> and *F*<sup>2</sup> can be calculated with Eq. (4.3); (**c**) the parameters of a JNP used to calculate the areas of the lobes, radius *R* and height of the lobe *h*


**Table 4.1** Aspect ratio and HLB values in the homologous series of JNPs

<sup>a</sup>The areas of the lobes were calculated from the equation *A* = 2*πRh*, where *h* is the height of the Janus lobe and *R* is its radius, Fig. 4.6c
<sup>b</sup>Values calculated with Eq. (4.2) with *F*<sup>1</sup> = *F*<sup>2</sup> = 1
<sup>c</sup>*F*<sup>1</sup> and *F*<sup>2</sup> were calculated with Eq. (4.3)
<sup>d</sup>Values calculated with Eq. (4.2)

From Ref. [6]. Copyright ©2017 by John Wiley & Sons, Inc. Reprinted by permission of John Wiley & Sons, Inc.

Interestingly, the HLB values calculated for the JPs excluding the weighting factors (5th column of Table 4.1) are similar to those calculated after taking into account the degree of polarity of the lobes (8th column of Table 4.1). That is because the measured *F*-values are close to unity, and thus the JPs have an almost ideal polarity contrast. Calculating the HLB value this way for a homologous series of JPs has the advantage of being able to predict the behaviour of the JPs with respect to their ability to act as w/o or o/w emulsifiers, discussed later. The HLB scale, taking values from 1 to 20, is also readily understood by scientists working with surfactants. For example, amphiphiles with values below 10 on Griffin's scale are good w/o emulsifiers (good affinity to the oil phase), while those with HLBs above 10 are good o/w emulsifiers, which was clearly shown by the emulsification experiments [1]. We believe this method of quantifying the Janus balance to be universally applicable, because it makes no assumptions about the particle geometry, orientation or position at the interface.
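The Griffin-scale rule of thumb quoted above (below 10: w/o emulsifier; above 10: o/w emulsifier) can be encoded as a trivial helper; the function name is ours.

```python
def emulsifier_type(hlb_value):
    """Griffin-scale rule of thumb: HLB < 10 favours w/o emulsions,
    HLB > 10 favours o/w emulsions."""
    if hlb_value < 10.0:
        return "w/o"
    if hlb_value > 10.0:
        return "o/w"
    return "balanced"
```

Applied to a homologous series of JPs, such a rule predicts the emulsion-phase inversion observed around the middle of the series [1].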

# **4.5 Interfacial Activity and Adsorption at Interfaces**

It is well known that particles can spontaneously adsorb at liquid–liquid and air–liquid interfaces and can thus lower the system's Gibbs free energy, which translates into a reduction of the interfacial tension (IFT). By monitoring the decrease of the interfacial tension with time, i.e. the dynamic surface tension, usually with pendant drop tensiometry, one can obtain information about the JPs' interfacial adsorption kinetics. The bulk diffusivity of the particles obeys the Stokes–Einstein relation. For particles with appropriate wettability, bulk-to-surface diffusion may lead to adsorption and attachment at the interface if: (a) there is no strong electrostatic repulsion between the interface and the particle (image charge repulsion) and (b) the energy costs related to the surface dehydration and re-solvation of the surface by the other solvent are not too high.
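For orientation, the bulk diffusivity entering the diffusion-limited adsorption regime can be estimated from the Stokes–Einstein relation, *D* = *kT*/(6*πηR*). The sketch below is a minimal helper (names ours), assuming water at 25 °C with a viscosity of about 0.89 mPa·s.

```python
import math

def stokes_einstein_D(radius_m, T_K=298.0, eta_Pa_s=8.9e-4):
    """Bulk diffusion coefficient D = kT / (6*pi*eta*R) of a spherical particle."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T_K / (6.0 * math.pi * eta_Pa_s * radius_m)

# A particle of 1 um radius in water diffuses with D of order 1e-13 m^2/s,
# i.e. much more slowly than a nanoparticle, consistent with the slower
# IFT decay observed for larger particles.
D = stokes_einstein_D(1e-6)
```

The inverse scaling with radius is the reason the IFT decay is slower for larger particles, as noted below.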

Particle adsorption kinetics may operate in different regimes: diffusion limited, activation-energy limited, or a combination of both. It is therefore expected that the decrease in IFT vs. time is slower for larger particles than for smaller ones. Furthermore, according to the time evolution of the interfacial tension, the adsorption can be divided into three stages, depicted in Fig. 4.7: (I) the free diffusion of particles to the interface, (II) continuous adsorption of Janus particles to form domains at the interface, and (III) particle packing and rearrangement into compact domains/islands [41, 44].

**Fig. 4.7** Cartoon depicting the time evolution of the IFT and the three main stages of JP adsorption: (I) JPs adsorb at a pristine oil–water interface, where bulk-to-surface diffusion or a large activation energy barrier to adsorption may be the limiting step, (II) continuous adsorption of JPs at the interface and formation of domains and 2D islands and (III) full occupation of the interface by JPs, where particle–particle repulsive interactions contribute to increasing the energy barrier to adsorption and the monolayer re-organises at the interface

The magnitude of the IFT reduction, i.e. the difference between the starting value at *t* = 0 and the plateau value at *t*plateau beyond which the IFT does not decrease anymore, Fig. 4.7, indicates how effective the JPs are at lowering the interfacial tension. This magnitude encompasses several phenomena, such as the ability of the particles to pack into a compact monolayer, the surface and interfacial energies of the particles with both phases and the particle–particle lateral interactions. Hypothetically, the JPs can lower the IFT to almost zero if their interfacial energy with both phases is zero, i.e. a perfect affinity of each lobe for the oil and water phases, and if they can pack compactly, leaving no free oil–water interface. It has been predicted [13] and experimentally demonstrated that JPs have a significantly higher ability to lower the IFT than HPs [45, 46]. The reason is that JPs, being constituted of two parts, can achieve better oil–JP and water–JP compatibility (translating into very low interfacial energies), which cannot be achieved with HPs. In the case of JPs, not only the size but also the shape affects the IFT. As the shape changes from sphere to disc and rod, Gao et al. [41] observed different adsorption kinetics, different packing behaviours and ultimately different IFT values. For the three types of Janus particles with the same surface area, the ability to decrease the interfacial tension increases from Janus sphere to Janus disc to Janus rod. The particle–interface interaction has also been shown to play a role; for example, particles with a large zeta potential seem to adsorb better at the interface, due to an interfacial charge re-distribution caused by the strong electric field of the particle that locally inverts the charge density of the air–water interface [47]. The way JNPs assemble at oil–water interfaces can be affected by their aspect ratio, i.e. the geometrical packing parameter, and by the polarity contrast, analogous to molecular surfactants.

# *4.5.1 Contact Angle and Interfacial Adsorption Energies of HPs vs. JPs*

The adsorption of JPs at interfaces can be treated purely thermodynamically, wherein only the initial and final states are taken into account, regardless of the path the system takes. For example, the Gibbs free energy of JP adsorption at the oil–water interface can be calculated by taking the free energy of the particle in one of the bulk phases as the initial state and the free energy of the particle at the interface between the two phases as the final state. The difference between the two is the Gibbs free energy of adsorption of the JP, i.e. the negative of the desorption energy. For an HP this can be easily calculated, for example, at the air–water interface, if we know its contact angle *β* with water, Fig. 4.8.

The free energy can be calculated for any particle simply by multiplying its interfacial energy, also known as the energy density (mJ/m²), by its area exposed to each fluid forming the interface; for the case in Fig. 4.8 the free energy at the interface is:

**Fig. 4.8** Spherical particle adsorbed at the air–water interface. The contact angle with water *β* can be determined in two equivalent ways: (left) between the position of the air–water interface and the surface of the particle, or (right) from the angle formed by the radius pointing at the three-phase line. The parameters *a* and *d* are measured from the centre and the apex of the particle, respectively: *a* is the immersion depth of the particle in water and *d* is the depth of the particle cap in the second phase, air or oil

$$\begin{aligned} E_{\text{interface}} &= \gamma_{\text{(HP,air)}} A_{\text{(HP,air)}} + \gamma_{\text{(HP,water)}} A_{\text{(HP,water)}} \\ &\quad - \gamma_{\text{(air,water)}} A_{\text{(circular base, radius } r)} \end{aligned} \tag{4.4}$$

The explicit expressions of the areas are:

$$A_{\text{(HP,air)}} = 2\pi R(R - a) = 2\pi R[R - R\sin(\pi/2 - \beta)] = 2\pi R^2(1 - \cos\beta) \tag{4.5}$$

$$A_{\text{(HP,water)}} = 2\pi R(R + a) = 2\pi R[R + R\sin(\pi/2 - \beta)] = 2\pi R^2(1 + \cos\beta) \tag{4.6}$$

$$A_{\text{(circular base, radius } r)} = \pi r^2 = \pi \left[ R \cos(\pi/2 - \beta) \right]^2 = \pi R^2 \sin^2 \beta,\tag{4.7}$$

where the last expression is the area of the air–water interface excluded by the particle.

Finally, the free energy of the HP at the interface is:

$$\begin{aligned} E_{\text{interface}} &= \gamma_{\text{(HP,air)}} 2\pi R^2 (1 - \cos \beta) + \gamma_{\text{(HP,water)}} 2\pi R^2 (1 + \cos \beta) \\ &\quad - \gamma_{\text{(air,water)}} \pi R^2 \sin^2 \beta. \end{aligned} \tag{4.8}$$

The Gibbs free energy of a single HP completely immersed in water will be:

$$E_{\text{water}} = \gamma_{\text{(HP,water)}} 4\pi R^2. \tag{4.9}$$

Therefore, the total interfacial adsorption energy, expressed as a function of the contact angle *β*, will be:

$$E_{\text{water}} - E_{\text{interface}} = \gamma_{\text{(HP,water)}} 2\pi R^2 (1 - \cos \beta) - \gamma_{\text{(HP,air)}} 2\pi R^2 (1 - \cos \beta) + \gamma_{\text{(air,water)}} \pi R^2 \sin^2 \beta. \tag{4.10}$$
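These expressions can be checked numerically: for any set of surface energies obeying the Young–Dupré relation, the adsorption energy of Eq. (4.10) must reduce to *γ*(air,water)*πR*²(1 − cos *β*)². The sketch below uses our own helper names and illustrative surface-energy values; the excluded-area term is taken with a negative sign, as required for consistency with this result.

```python
import math

def E_interface(g_pa, g_pw, g_aw, R, beta_deg):
    """Free energy of a spherical HP at the air-water interface, Eq. (4.8);
    the air-water area excluded by the particle is subtracted."""
    b = math.radians(beta_deg)
    return (g_pa * 2 * math.pi * R**2 * (1 - math.cos(b))
            + g_pw * 2 * math.pi * R**2 * (1 + math.cos(b))
            - g_aw * math.pi * R**2 * math.sin(b) ** 2)

def E_water(g_pw, R):
    """Particle fully immersed in water, Eq. (4.9)."""
    return g_pw * 4 * math.pi * R**2

# Choose surface energies consistent with Young-Dupre:
# cos(beta) = (g_pa - g_pw) / g_aw   [illustrative values, J/m^2]
g_aw, beta, R = 0.072, 60.0, 1e-6
g_pw = 0.020
g_pa = g_pw + g_aw * math.cos(math.radians(beta))
dE = E_water(g_pw, R) - E_interface(g_pa, g_pw, g_aw, R, beta)
```

For these numbers, `dE` coincides with *γ*(air,water)*πR*²(1 − cos *β*)², the adsorption (detachment) energy of the particle.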

Pieranski [48] was among the pioneers in calculating particle energies at interfaces; the equations he proposed were expressed as a function of the HP immersion depth "*a*", Fig. 4.8, which is the distance from the centre of the particle to the interface. This depth can also be determined directly, without measuring the contact angle, from SEM images of cryogenised particles trapped at interfaces [49]. The above expression as a function of "*a*" would be:

$$\begin{aligned} E_{\text{interface}} &= \gamma_{\text{(Particle,air)}} 2\pi R(R - a) + \gamma_{\text{(Particle,water)}} 2\pi R(R + a) \\ &\quad - \gamma_{\text{(air,water)}} \pi (R^2 - a^2), \end{aligned} \tag{4.11}$$

which after rearrangement becomes:

$$E_{\text{interface}} = \gamma_{\text{(Particle,air)}} 2\pi R^2 (1 - a/R) + \gamma_{\text{(Particle,water)}} 2\pi R^2 (1 + a/R) - \gamma_{\text{(air,water)}} \pi R^2 [1 - (a/R)^2]. \tag{4.12}$$

The depth *a* is related to the interfacial tension via the Young–Dupré contact angle:

$$\cos \beta = \frac{a}{R} = \frac{\gamma_{\text{(HP,air)}} - \gamma_{\text{(HP,water)}}}{\gamma_{\text{(air,water)}}}. \tag{4.13}$$

The sign of cos *β* then follows automatically: it is positive for *β <* 90° (when *γ*(HP,air) *> γ*(HP,water)) and negative for *β >* 90°.

Very often in the literature one finds the following expression for the energy with which a small particle of radius *R* is held at the air–water or oil–water interface [50]:

$$E_{\text{detachment}} = \gamma_{\text{(air,water)}} \pi R^2 [1 \pm \cos \beta]^2,\tag{4.14}$$

where the sign in parenthesis is negative for *β <* 90° and positive for *β >* 90°. This form results from inserting the expression for cos *β* defined by the Young–Dupré relation, Eq. (4.13), into the *E*interface expression, Eq. (4.11), obtaining [51]:

### 4 Amphiphilic Janus Particles at Interfaces 113

$$\begin{aligned} E_{\text{interface}} &= \gamma_{\text{(HP,air)}} 2\pi R^2 (1 - \cos \beta) + \gamma_{\text{(HP,water)}} 2\pi R^2 (1 + \cos \beta) - \gamma_{\text{(air,water)}} \pi R^2 \sin^2 \beta \\ &= 2\pi R^2 \left[ \gamma_{\text{(HP,air)}} + \gamma_{\text{(HP,water)}} + \cos \beta \left( \gamma_{\text{(HP,water)}} - \gamma_{\text{(HP,air)}} \right) \right] - \gamma_{\text{(air,water)}} \pi R^2 \sin^2 \beta \\ &= 2\pi R^2 \left[ \gamma_{\text{(HP,air)}} + \gamma_{\text{(HP,water)}} - \gamma_{\text{(air,water)}} \cos^2 \beta \right] - \gamma_{\text{(air,water)}} \pi R^2 \sin^2 \beta \\ &= 4\pi R^2 \gamma_{\text{(HP,water)}} + 2\pi R^2 \left[ \gamma_{\text{(air,water)}} \cos \beta - \gamma_{\text{(air,water)}} \cos^2 \beta \right] - \gamma_{\text{(air,water)}} \pi R^2 \sin^2 \beta, \end{aligned} \tag{4.15}$$

where the Young–Dupré relation, Eq. (4.13), was used in the second and third steps in the form *γ*(HP,air) − *γ*(HP,water) = *γ*(air,water) cos *β*.

Keeping in mind that sin<sup>2</sup> *β* = 1 − cos<sup>2</sup> *β*,

$$\begin{aligned} E_{\text{interface}} &= 4\pi R^2 \gamma_{\text{(HP,water)}} - \pi R^2 \gamma_{\text{(air,water)}} \left[ 1 - 2\cos \beta + 2\cos^2 \beta - \cos^2 \beta \right] \\ &= 4\pi R^2 \gamma_{\text{(HP,water)}} - \pi R^2 \gamma_{\text{(air,water)}} (1 - \cos \beta)^2,\end{aligned} \tag{4.16}$$

where the first term in the last equation is the interfacial energy of a sphere completely immersed in water; the last term is therefore the change in interfacial energy upon attachment of the particle to the air–water interface, i.e. the energy of the final state relative to the initial one. For example, if *γ(*air*,*water*)* = 75 mJ/m², *R* = 1 μm and *β* = 90°, then the energy holding the particle at the interface at room temperature would be 2.4 × 10<sup>−13</sup> J ≈ 5.7 × 10<sup>7</sup> kT, where 1 kT = 4.114 × 10<sup>−21</sup> J at 298 K.
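The order-of-magnitude estimate above can be reproduced directly from Eq. (4.14); the snippet below (helper name ours) uses the same values, γ = 75 mJ/m², *R* = 1 μm, *β* = 90°, and takes the sign that gives the smaller, i.e. easier, detachment path.

```python
import math

def detachment_energy(gamma_aw, R, beta_deg):
    """Energy holding a spherical particle at a fluid interface, Eq. (4.14),
    with the sign chosen to give the smaller detachment energy."""
    c = math.cos(math.radians(beta_deg))
    return gamma_aw * math.pi * R**2 * (1 - abs(c)) ** 2

kT = 4.114e-21                            # J, at 298 K
E = detachment_energy(75e-3, 1e-6, 90.0)  # ~2.4e-13 J, i.e. ~5.7e7 kT
```

The result, roughly 10⁷–10⁸ kT, is why micron-sized particles are considered irreversibly adsorbed at fluid interfaces, in contrast to molecular surfactants.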

In order to calculate the Gibbs free energy of JP adsorption at the air–water interface, for a JP consisting of two hemispheres, one apolar (A) and one polar (P), one needs to calculate the energy difference between the particle entirely immersed in water and the particle adsorbed at the air–water interface. This can be done by strictly following the nomenclature of Ondarçuhu [12], which assumes a perfectly spherical JP whose adsorption at the interface occurs via partial dehydration of the surface of one lobe, as depicted in Fig. 4.9.

A. Case of partial de-wetting of the A-lobe, *θ* > *α*, and the general case *α* ≠ *π*/2.

In order to solve this, the expression for the energy of each interface must be found. Next, we find the expressions for *E*(A,air), *E*(A,water), *E*(P,water) and *E*(air,water) as a function of the contact angle,

$$E\_{\text{(A,air)}} = \gamma\_{\text{(A,air)}} A\_{\text{(A,air)}} = \gamma\_{\text{(A,air)}} 2\pi R^2 (1 + \cos \theta),\tag{4.17}$$

**Fig. 4.9** A perfectly spherical JP adsorbed at the oil–water interface. (Left) Case of partial de-wetting of the A-lobe, for *θ* > *α* and the general case *α* ≠ *π*/2. (Right) Case of partial de-wetting of the P-lobe, *θ* < *α* and the general case *α* ≠ *π*/2

where the area is given by

$$\begin{split} A\_{\text{(A,air)}} &= 2\pi R \, d = 2\pi R (R - b) \\ &= 2\pi R^2 [1 - \sin(\pi/2 - \beta)] = 2\pi R^2 [1 - \sin(\pi/2 - \pi + \theta)] \\ &= 2\pi R^2 [1 - \sin(-\pi/2 + \theta)] = 2\pi R^2 [1 + \sin(\pi/2 - \theta)] \\ &= 2\pi R^2 (1 + \cos \theta), \end{split} \tag{4.18}$$

using sin(−*π*/2 + *θ*) = −cos *θ*. Similarly,

$$E\_{\rm (A,water)} = \gamma\_{\rm (A,water)} A\_{\rm (A,water)} = \gamma\_{\rm (A,water)} 2\pi R^2 (\cos \alpha - \cos \theta), \tag{4.19}$$

where

$$b = R\sin(-\pi/2 + \theta) = -R\sin(\pi/2 - \theta) = -R\cos\theta\tag{4.20}$$

$$a = R\sin(-\pi/2 + \alpha) = -R\sin(\pi/2 - \alpha) = -R\cos\alpha\tag{4.21}$$

$$A\_{\text{(A,water)}} = 2\pi R^2 (\cos \alpha - \cos \theta). \tag{4.22}$$

#### 4 Amphiphilic Janus Particles at Interfaces 115

The surface area of the zone, excluding the top and bottom bases, is *A*(A,water) = 2*πRh*, where *h* = *b* − *a* is the height of the zone.

$$E\_{\text{(P,water)}} = \gamma\_{\text{(P,water)}} A\_{\text{(P,water)}} = \gamma\_{\text{(P,water)}} 2\pi R^2 (1 - \cos \alpha),\tag{4.23}$$

where the area is:

$$A\_{\text{(P,water)}} = 2\pi Rd' = 2\pi R(R+a) = 2\pi R^2(1-\cos\alpha) \tag{4.24}$$

$$E\_{\text{(air,water)}} = \gamma\_{\text{(air,water)}} A\_{\text{(circular base, radius } r)} = \gamma\_{\text{(air,water)}} \pi R^2 \sin^2 \theta. \tag{4.25}$$

Therefore, the total Gibbs free energy of the JP at the interface is:

$$\begin{split} E &= 2\pi R^2 \Big[ \gamma\_{\text{(A,air)}} (1 + \cos \theta) + \gamma\_{\text{(A,water)}} (\cos \alpha - \cos \theta) \\ &\quad + \gamma\_{\text{(P,water)}} (1 - \cos \alpha) - \gamma\_{\text{(air,water)}} \frac{1}{2} \sin^2 \theta \Big]. \end{split} \tag{4.26}$$

This equation has several limiting cases, for example, if the Janus lobe is a perfect hemisphere, then *α* = *π/*2 and the above equation becomes:

$$\begin{split} E &= 2\pi R^2 \Big[ \gamma\_{\text{(A,air)}} (1 + \cos \theta) - \gamma\_{\text{(A,water)}} \cos \theta + \gamma\_{\text{(P,water)}} \\ &\quad - \gamma\_{\text{(air,water)}} \frac{1}{2} \sin^2 \theta \Big]. \end{split} \tag{4.27}$$

Also, if instead of the parameters *α* and *θ* we use only the contact angle *β*, where *θ* = *π* − *β*, the equation becomes:

$$\begin{split} E\_{\text{interface}} &= 2\pi R^2 \Big[ \gamma\_{\text{(A,air)}} (1 - \cos \beta) + \gamma\_{\text{(A,water)}} \cos \beta \\ &\quad + \gamma\_{\text{(P,water)}} - \gamma\_{\text{(air,water)}} \sin^2(\beta) / 2 \Big]. \end{split} \tag{4.28}$$
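As a quick consistency check, Eq. (4.28) must reproduce Eq. (4.27) under the substitution *θ* = *π* − *β*. A numerical sketch (the function names are ours and the interfacial tension values are arbitrary illustrative numbers, not data from the chapter):

```python
import math

# Arbitrary illustrative interfacial tensions in N/m (assumed values)
g_A_air, g_A_water, g_P_water, g_air_water = 0.030, 0.045, 0.020, 0.072
R = 1e-6  # particle radius, m

def E_hemi_theta(theta):
    """Eq. (4.27): JP energy for a hemispherical lobe (alpha = pi/2), in theta."""
    return 2 * math.pi * R**2 * (
        g_A_air * (1 + math.cos(theta))
        - g_A_water * math.cos(theta)
        + g_P_water
        - 0.5 * g_air_water * math.sin(theta) ** 2)

def E_hemi_beta(beta):
    """Eq. (4.28): the same energy rewritten with the contact angle beta."""
    return 2 * math.pi * R**2 * (
        g_A_air * (1 - math.cos(beta))
        + g_A_water * math.cos(beta)
        + g_P_water
        - 0.5 * g_air_water * math.sin(beta) ** 2)

for beta in (0.3, 1.0, 1.8, 2.6):  # contact angles in rad
    assert math.isclose(E_hemi_theta(math.pi - beta), E_hemi_beta(beta),
                        rel_tol=1e-12)
print("Eq. (4.28) agrees with Eq. (4.27) for theta = pi - beta")
```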

B. Case of total de-wetting of the A-lobe and partial de-wetting of the P-lobe, *θ* < *α*, and the general case *α* ≠ *π*/2.

$$E\_{\text{(A,air)}} = \gamma\_{\text{(A,air)}} A\_{\text{(A,air)}} = \gamma\_{\text{(A,air)}} 2\pi R^2 (1 + \cos \alpha),\tag{4.29}$$

where

$$\begin{split} A\_{\text{(A,air)}} &= 2\pi R(R - a) = 2\pi R \left[ R - R \sin(\pi/2 - \beta) \right] \\ &= 2\pi R \left[ R - R \sin(\pi/2 - \pi + \alpha) \right] \\ &= 2\pi R \left[ R + R \sin(\pi/2 - \alpha) \right] = 2\pi R^2 (1 + \cos \alpha) \end{split} \tag{4.30}$$

$$E\_{\text{(P,air)}} = \gamma\_{\text{(P,air)}} A\_{\text{(P,air)}} = \gamma\_{\text{(P,air)}} 2\pi R^2 (\cos \theta - \cos \alpha),\tag{4.31}$$

where

$$R - b = R - R\sin(\theta - \pi/2) = R + R\sin(\pi/2 - \theta) = R + R\cos(\theta) \tag{4.32}$$

$$R - a = R + R\sin(\pi/2 - \alpha) = R + R\cos(\alpha)\tag{4.33}$$

$$A\_{\text{(P,air)}} = 2\pi R^2 (\cos \theta - \cos \alpha) \tag{4.34}$$

$$\begin{cases} E\_{\text{(P,water)}} = \gamma\_{\text{(P,water)}} 2\pi R^2 (1 - \cos \theta) \\ A\_{\text{(P,water)}} = 2\pi R d' = 2\pi R (R + b) = 2\pi R^2 (1 - \cos \theta) \end{cases} \tag{4.35}$$

$$\begin{cases} E\_{\text{(air,water)}} = \gamma\_{\text{(air,water)}} A\_{\text{(circular base, radius } r)} = \gamma\_{\text{(air,water)}} \pi R^2 \sin^2 \theta \\ A\_{\text{(circular base, radius } r)} = \pi r^2 = \pi R^2 \cos^2(\pi/2 - \pi + \theta) = \pi R^2 \sin^2 \theta \end{cases} \tag{4.36}$$

Therefore, the total Gibbs free energy of the particle at the interface is:

$$\begin{split} E\_{\text{interface}} &= 2\pi R^2 \Big[ \gamma\_{\text{(A,air)}} (1 + \cos \alpha) + \gamma\_{\text{(P,air)}} (\cos \theta - \cos \alpha) \\ &\quad + \gamma\_{\text{(P,water)}} (1 - \cos \theta) - \gamma\_{\text{(air,water)}} (\sin^2 \theta) / 2 \Big]. \end{split} \tag{4.37}$$
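The two branches of the model must agree when the three-phase line sits exactly at the Janus boundary, *θ* = *α*. The sketch below encodes Eq. (4.26) (case A) and Eq. (4.37) (case B, with its second term written with *γ*(P,air), consistent with Eq. (4.31)) and checks their continuity; the function names are ours and the tension values are assumed for illustration only:

```python
import math

# Assumed illustrative interfacial tensions, N/m
g = {"A,air": 0.030, "A,water": 0.045, "P,air": 0.025,
     "P,water": 0.020, "air,water": 0.072}
R = 1e-6  # particle radius, m

def E_caseA(theta, alpha):
    """Eq. (4.26): partial de-wetting of the A-lobe (theta > alpha)."""
    return 2 * math.pi * R**2 * (
        g["A,air"] * (1 + math.cos(theta))
        + g["A,water"] * (math.cos(alpha) - math.cos(theta))
        + g["P,water"] * (1 - math.cos(alpha))
        - 0.5 * g["air,water"] * math.sin(theta) ** 2)

def E_caseB(theta, alpha):
    """Eq. (4.37): total de-wetting of the A-lobe, partial of the P-lobe."""
    return 2 * math.pi * R**2 * (
        g["A,air"] * (1 + math.cos(alpha))
        + g["P,air"] * (math.cos(theta) - math.cos(alpha))
        + g["P,water"] * (1 - math.cos(theta))
        - 0.5 * g["air,water"] * math.sin(theta) ** 2)

# The two energies coincide at theta = alpha for any Janus balance alpha
for alpha in (0.8, math.pi / 2, 2.0):
    assert math.isclose(E_caseA(alpha, alpha), E_caseB(alpha, alpha),
                        rel_tol=1e-12)
print("Cases A and B are continuous at theta = alpha")
```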

The surface energy of Janus particles located completely in air (or oil), *E*air(oil), or completely in water, *E*water, can easily be found by setting the angle *β* in the particle Gibbs free energy equations to 0° or 180°, respectively. Further, Binks and Fletcher [13] suggested that the interfacial activity of a particle should be evaluated by the magnitude of its energy of interfacial detachment, equal to *E*water − *E*interface, but this can be debated. Because the above energies scale with *R*², this criterion cannot be used to compare particles of different sizes: the largest particle, say a tennis ball, would be the most interfacially active particle according to this framework. Instead, we proposed that the measurable drop in the surface tension of the liquid, or in the interfacial tension of the oil–water interface, should be used as the criterion for estimating the interfacial activity of particles. In addition, this method is difficult to apply to JPs that are, for example, dumbbell shaped, because it is difficult to keep track of the position of the long axis and of its tilt with respect to the surface normal, although in principle it can be done. Moreover, accurately determining the contact angle on particles is very difficult, so Ondarçuhu's model is hard to apply in practice. For these reasons, determining the Gibbs free energy of a particle at an interface, and especially of JPs, is very difficult.

# *4.5.2 Inter-Particle Interaction at Interfaces vs. Lowering the Interfacial Tension*

Once adsorbed at oil–water or air–water interfaces, HPs can interact with other particles via electrostatic and van der Waals forces, but also via capillary forces arising from interface deformation, which can be repulsive or attractive. The latter is particularly relevant for particles that strongly deform the interface, or for very rough particles that pin the contact line; for smooth colloids this is rarely the case. Capillary interactions occur especially when large particles, depending on their surface roughness, wettability and buoyancy, deform the interface by changing its local curvature. The resulting interaction can be attractive or repulsive depending on the local curvature of the interface [52]. The capillary interaction forces between particles bound at interfaces have been discussed in detail by Kralchevsky and Nagayama [53]. This is also known as the Cheerios effect: cereal grains floating on milk clump together precisely because of this deformation of the milk–air interface.

On the other hand, the surface charge on the JPs' surface arises from the ionisation of charged groups in water. The interaction between particles is generally well described by the Derjaguin–Landau–Verwey–Overbeek (DLVO) theory. The expression for the overall electrostatic pair-potential between two point charges at the air–water interface was first derived by Stillinger [54] using Debye–Hückel theory. Colloidal particles can also carry charge in non-aqueous solvents, but typically much less than in water, owing to the lack of ionisation of surface functional groups; charge is therefore considered less important in the stabilisation of colloids in such media. Upon adsorption, the part of the particle immersed in the water phase remains charged, while the part in the non-polar solvent or air is neutral or less charged, which leads to an asymmetric double layer. This results in a dipole with its vector oriented perpendicular to the interface. Because neighbouring interfacial dipoles are all oriented the same way, parallel to one another, they repel. The overall contributions to the interaction pair-potential *U*(*r*) are therefore the repulsive double-layer interactions at short range and dipole–dipole repulsion at longer range [55]. The magnitude of the dipole–dipole interaction is critical for particle assembly and the formation of ordered 2-D crystal monolayers at the interface, as noted by Pieranski [48]. Van der Waals interactions may also play a role, but their effects are negligibly small compared to the electrostatic repulsion.

With respect to the modification of interfacial or surface tension by particles, the adsorption of HPs at the interface is expected to have measurable effects. A recent study [56] of the interfacial activity of simple non-amphiphilic silica nanoparticles at the air–water and hexadecane–water interfaces, performed with pendant drop tensiometry, has shown that simple homogeneous particles indeed produce a notable IFT drop upon interfacial adsorption. Silica HPs with the most "favourable" wetting, i.e. a contact angle close to 90°, produce the strongest effect on the IFT drop once they reach the interface. The magnitude of the IFT drop is expected to depend on the bulk particle concentration. Much earlier, Okubo [57] had shown the same effect for PS nanoparticles, with the most noticeable IFT drop occurring when the particle concentration was large enough that crystallite formation was already noticeable in the bulk phase. Notable is also the work of Johnson et al. [58–60], who studied the effects of TiO2 nanoparticle adsorption and found a similar dependence of the IFT drop on the HP concentration. A measurable drop in the IFT of an air–water or oil–water interface can occur as a consequence of the weakening of the cohesion forces in the top-most interfacial layer, as is the case for surfactants possessing an alkyl tail, which sticks out into the non-polar phase or air and can only weakly interact with its neighbours, thus weakening the cohesion and the IFT. Striolo et al. [61] have shown that the IFT decreases significantly when the surface coverage is large enough that repulsive HP–HP interactions are expected. The existence of strong inter-particle capillary interactions may instead lead to an increase in the observed IFT, as observed by Johnson and Dong [59].

Similar behaviour is expected for JPs, however with some differences. For JPs consisting of a polar part and an apolar part, the polarity contrast between the Janus lobes, and the role it plays in the JP–JP interfacial interaction, is decisive for their interfacial behaviour. Ignoring interfacial deformation effects and excluding capillary interactions, the interfacial activity determined from the IFT can be more dramatic than that observed for HPs with the same size, surface properties and composition as either of the Janus lobes. As pointed out earlier, the interfacial activity of particles cannot be estimated solely from the interfacial adsorption/desorption Gibbs free energy, because this scales with *R*², which becomes ambiguous when comparing particles that differ in size. Instead, we propose that the interfacial activity should be evaluated by the ability to decrease the IFT of an interface: this is the Janus effect. The Janus effect has been demonstrated by Fernández-Rodríguez et al. [31] on PMMA/PtBMA JP microparticles fabricated by the electrohydrodynamic co-jetting method, and by us on PDIPAEMA/PS JP nanoparticles [62]. Yet another example of the enhanced interfacial activity of JPs compared to the constituent HPs is that of Glaser et al. [45], who observed a significant decrease in the IFT vs. time of Au/Fe2O3 JPs at the hexane–water interface. The ability of JPs to adsorb at interfaces and lower the IFT may find important use in oil recovery applications [63].

# *4.5.3 Activation and Adsorption Energies of JPs Spontaneously Adsorbing at Interfaces*

The adsorption of particles at interfaces is mostly entropically driven: the overall free energy of the system decreases due to the increase in water entropy, as the ordered water layer on the surface of the particles is freed upon interfacial adsorption. The dehydration and re-solvation of the surface involves, however, some

**Fig. 4.10** Cartoon depicting the adsorption energy *E* and the activation energy *Ea* for interfacial adsorption of JP at the oil–water. The JPs are drawn with a hypothetical orientation, long axis perpendicular to the interface

energy costs and is one of the factors contributing to the magnitude of the activation energy barrier, Fig. 4.10. An activation energy can also arise from particle–interface electrostatic interactions, or from interactions between incoming particles and those already adsorbed at the interface.

The adsorption energies of JNPs at interfaces can be calculated by first measuring the contact angle of the JPs with the solvents of the two phases, adopting a geometric model and performing the calculations as shown in Sect. 4.5.1. For complex JP geometries, such as dumbbell or disc shapes, the contact angle with each liquid phase is much more difficult to determine, because these geometries greatly increase the complexity of the geometric model in Sect. 4.5.1 and add severe uncertainties, mainly due to the many possible orientations of the particles at the interface and the exact angle of the particle's long axis with respect to the surface. A much simpler way is to calculate the adsorption energy directly from the IFT vs. time data obtained by pendant drop tensiometry. In pendant drop tensiometry the IFT drop with time can be monitored at the oil–water interface for a long time. The same measurements can be performed at the air–water interface, but have the disadvantage that liquid evaporation leads to relatively shorter observation times. Such measurements are universally applicable to any interfacially active compound, for all types of particles and surfactants. The IFT vs. time curves are the starting point for applying different kinetic models. The same kinetic models that apply to HPs [44] also apply to JPs without restrictions. The dynamic IFT measurement typically stops when the IFT remains constant over time, that is, when a plateau equilibrium value of the interfacial tension, *γp*, has been reached, Fig. 4.7.

The bulk particle concentration can influence the magnitude of the IFT drop until the interface becomes fully saturated. The maximum IFT drop, achieved at the highest particle concentration, is an important parameter for calculating the interfacial adsorption energy of the particle. A typical evolution of the IFT vs. time as a function of concentration is given in Fig. 4.11 and corresponds to PS-PDIPAEMA/P(3-TSPM) JPs at the heptane–water interface [62]. Notice that *γp* remains constant above a concentration of 10 mg/mL of PS-PDIPAEMA/P(3-TSPM)-1 JPs, meaning that the maximum IFT drop was reached at this concentration. In addition,

**Fig. 4.11** IFT vs. time curves of the heptane–water interface in the presence of PS-PDIPAEMA/P(3-TSPM)-1 JPs at pH = 2: (**a**) 20 mg/mL, (**b**) 10 mg/mL, (**c**) 1 mg/mL, (**d**) 0.1 mg/mL and HNPs (**e**) at 10 mg/mL. Each data point is the average of three independent measurements and the error bars in grey represent the standard deviation. The data was acquired at 21 ◦C. Reprinted with permission from Ref. [62]. Copyright 2017 American Chemical Society

in the same figure the dynamic surface tension of the JPs is compared to that of HPs with the same composition and size as each of the Janus lobes; it can be concluded that the latter are considerably less efficient than the JPs at lowering the IFT, in agreement with similar findings by Fernández-Rodríguez et al. [31] and Glaser et al. [45]. This demonstrates that the amphiphilicity of Janus particles enhances their interfacial activity.

Dinsmore et al. [64] proposed that the lowest *γp*, reached when the interface is fully saturated with particles, can be used to calculate the interfacial attachment energy Δ*E*:

$$
\Delta E = -(\gamma\_0 - \gamma\_p)\pi R^2/\eta,\tag{4.38}
$$

where *γ*0 is the initial IFT, before particles adsorb at the interface, Fig. 4.7, *R* is the radius of the particles and *η* is the packing density of the particle monolayer at the saturated interface. Analysing the dynamic IFT measurement curves in Fig. 4.11 for the polymeric JPs adsorbing at the heptane–water interface, Wu and Honciuc [62] calculated the energy of attachment Δ*E* using Eq. (4.38); the values obtained at the heptane–water, toluene–water and air–water interfaces are summarised in Table 4.2. Interestingly, the Δ*E* values for the JPs are larger than those of the HPs at the same interfaces by up to one order of magnitude, more than predicted by the calculations of Binks and Fletcher [13].
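Equation (4.38) turns the measured pendant-drop plateau directly into an attachment energy. A minimal sketch, assuming illustrative values of *γ*0, *γp* and *R* (not the Table 4.2 data) and a hexagonally close-packed monolayer, *η* ≈ 0.91:

```python
import math

def attachment_energy(gamma_0, gamma_p, R, eta=0.91):
    """Eq. (4.38): Delta E = -(gamma_0 - gamma_p) * pi * R^2 / eta.
    eta ~ 0.91 assumes a hexagonally close-packed particle monolayer."""
    return -(gamma_0 - gamma_p) * math.pi * R**2 / eta

kT = 4.114e-21  # J at 298 K
# Illustrative: a 50 mN/m clean interface relaxing to a 40 mN/m plateau
dE = attachment_energy(gamma_0=0.050, gamma_p=0.040, R=100e-9)
print(f"Delta E = {dE:.2e} J, i.e. {abs(dE) / kT:.1e} kT of attachment energy")
```

Even for a 100 nm particle and a modest 10 mN/m drop, the attachment energy is tens of thousands of kT, consistent with the essentially irreversible adsorption discussed in the text.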

The *activation energy of adsorption* can be determined from the same IFT vs. time curves. As already mentioned, the adsorption kinetics of any particle at an interface can be diffusion controlled, energy-barrier controlled or a combination of the two [65–67]. The adsorption kinetics of HPs measured via pendant drop dynamic IFT measurements are typically modelled using the Ward and Tordai theory [68], which considers adsorption to be controlled by the particle concentration and bulk diffusivity, followed by instantaneous adsorption at the interface. However, in the presence of an energy barrier, adsorption at the interface is much slower than predicted by the purely diffusive model of Ward and Tordai. In order to account

**Table 4.2** Activation energies of attachment of the PS-PDIPAEMA/P(3-TSPM) JPs and PS-PDIPAEMA HPs at toluene–water, heptane–water and air–water interfaces and their diffusivity, effective vs. actual


Reprinted and adapted with permission from Ref. [62]. Copyright 2017 American Chemical Society

for this discrepancy, Liggieri et al. [65] and Ravera et al. [66] proposed the effective diffusion model, which includes an activation energy barrier. In other words, not all the particles that arrive at the interface via diffusion also adsorb there. Particles with a low kinetic energy are not able to overcome the potential barrier for surface adsorption and diffuse back into the bulk, Fig. 4.7. The effective diffusion model enables the calculation of the activation energy barrier from the effective diffusion coefficient observed in the IFT vs. time data. The *D*eff can be determined from the IFT vs. time data using the following equation [67]:

$$\gamma = \gamma\_0 - 2 N\_A C\_0 \Delta E \sqrt{\frac{D\_{\rm eff} t}{\pi}},\tag{4.39}$$

where *C*0 is the concentration of particles in the bulk, *γ*0 is the surface tension of the clean interface and Δ*E* is the attachment energy calculated with Eq. (4.38). By fitting the earlier portion of the IFT vs. √*t* curves one can calculate *D*eff. Fitting only the earlier portion of the curves is justified by the fact that the incoming particles meet a pristine interface in the first stage of adsorption; at later times the electrostatic repulsion between the adsorbed and incoming particles dominates, Fig. 4.7 [67]. The obtained effective diffusion coefficient *D*eff can be compared with the one calculated from the Stokes–Einstein equation:

$$D\_0 = \frac{k\_B T}{6\pi\mu R},\tag{4.40}$$

where *μ* is the viscosity of water and *R* is the hydrodynamic radius of the particle. *D*eff is typically much lower than the Stokes–Einstein diffusivity if an energy barrier is indeed present. Basavaraj et al. [67] obtained differences between *D*eff and *D*0 as large as three orders of magnitude for 10 nm silica particles at the dodecane–water interface. The activation energy for attachment can be further calculated from the equation:

$$D\_{\rm eff} = D\_0 \exp\left(\frac{-E\_a}{k\_B T}\right),\tag{4.41}$$

where *Ea* is the activation energy of attachment at the interface. The calculated values of Δ*E*, *γp*, *D*eff, *D*0 and *Ea* for the PS-PDIPAEMA/P(3-TSPM) JPs and the PS-PDIPAEMA HPs at three interfaces are compared in Table 4.2, whereas the P(3-TSPM) HPs are not interfacially active. A quick inspection of the data shows that the effective diffusion coefficient *D*eff is in all cases three orders of magnitude lower than the Stokes diffusion coefficient calculated with Eq. (4.40), which can only be explained by the existence of an activation energy barrier. The *Ea* values are largest for the air–water interface and lowest for adsorption at the heptane–water interface, which can be explained in part by the good ability of heptane to "wet" the JP surface and replace the water hydration layer, while at the air–water interface the high cost of dehydrating the JPs' surface remains uncompensated. Further, Δ*E* is larger for the JPs than for the HPs, by up to ten times at the heptane–water interface and up to three times at the air–water interface, exceeding the upper limit of about three times predicted by the calculations of Binks and Fletcher [13]. The reason for this discrepancy may lie in the shape of the particles, which is snowman-like, while Binks and Fletcher treated a perfectly spherical JP.
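The workflow of Eqs. (4.39)–(4.41) can be sketched end to end, with synthetic γ vs. *t* data standing in for an experimental curve; every numerical value below (*C*0, Δ*E*, *R*, the "true" diffusivity) is assumed for illustration only:

```python
import math

kB, T = 1.380649e-23, 294.15   # J/K, K (21 C, as in the experiments)
NA = 6.02214076e23             # 1/mol
mu, R = 1.0e-3, 100e-9         # Pa s (water), m (assumed particle radius)
gamma_0 = 0.050                # N/m, clean-interface IFT (assumed)
C0 = 1.0e-5                    # mol/m^3, bulk particle concentration (assumed)
dE = 3.4e-16                   # J, |Delta E| from Eq. (4.38) (assumed)
D_true = 2.0e-15               # m^2/s, diffusivity used to generate the data

# Synthetic early-time IFT data following Eq. (4.39)
ts = [1.0, 4.0, 9.0, 16.0, 25.0]  # s
gammas = [gamma_0 - 2 * NA * C0 * dE * math.sqrt(D_true * t / math.pi)
          for t in ts]

# Least-squares slope of gamma vs sqrt(t): slope = -2*NA*C0*dE*sqrt(D_eff/pi)
xs = [math.sqrt(t) for t in ts]
n, sx, sy = len(xs), sum(xs), sum(gammas)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, gammas))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
D_eff = math.pi * (slope / (-2 * NA * C0 * dE)) ** 2  # invert Eq. (4.39)

D0 = kB * T / (6 * math.pi * mu * R)                  # Eq. (4.40)
Ea = -kB * T * math.log(D_eff / D0)                   # Eq. (4.41) inverted
print(f"D_eff = {D_eff:.2e} m^2/s, D0 = {D0:.2e} m^2/s, Ea = {Ea/(kB*T):.1f} kT")
```

With these assumed inputs the fit recovers the generating diffusivity, and the resulting *D*eff/*D*0 ratio of roughly 10⁻³ maps to an activation barrier of a few kT via Eq. (4.41), mirroring the three-orders-of-magnitude gap reported in Table 4.2.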

# **4.6 Pickering Emulsions: Arrested JPs at Interfaces**

Emulsions are mixtures of two immiscible liquids, typically oil and water, that find many applications ranging from food and cosmetics to pharmaceuticals and enhanced oil recovery. Emulsions destabilise via coalescence and Ostwald ripening, resulting in phase separation; enhancing their stability and extending the shelf-life of emulsion-based products therefore represents an important topic for research and development. Pickering emulsions are emulsions stabilised by particles, named after the British chemist and horticulturist S.U. Pickering, who first discovered them in 1907. The particles adsorb at the oil–water interface and prevent the formed droplets from coalescing. Producing a Pickering emulsion typically requires an external energy input in the form of mechanical stirring or ultrasonication. One advantage of Pickering emulsions compared to standard surfactant emulsions is their stability. Pickering emulsions can be used in a variety of applications, from drug delivery and scaffolding materials for tissue and bone growth to environmentally responsive materials and catalysis [69]. Yang et al. gave a general description of Pickering emulsions and their applications [69].

A variety of homogeneous and asymmetric nanoparticles can be used for forming and stabilising Pickering emulsions. The particles are first dispersed in one of the phases, typically water; then oil is added and high shear forces are applied by ultrasonication, shaking or high-power stirring. For particles that exhibit good interfacial activity the external energy input can be lower, and gentle shaking by hand may suffice to produce emulsions. It is important to note that when external energy is applied via ultrasonication or shearing, the particles acquire a high kinetic energy and easily overcome the activation energy barrier to interfacial attachment depicted in the cartoon in Fig. 4.10. Due to the energy input, one of the phases becomes dispersed, forming droplets whose interface is then "bombarded" by particles and quickly becomes saturated. The particles are irreversibly trapped at the interfaces because the desorption energy is very high; this is sometimes referred to as arrested particles at the interface, or arrested adsorption.

Depending on the affinity of the particles to one phase or the other, oil-in-water (o/w) or water-in-oil (w/o) emulsions can be obtained. The emulsion phase is determined by the particles' affinity to one phase or the other, according to Finkle et al. [70], similar to the Bancroft rules [71]. For example, hydrophobic carbon black particles are more likely to form w/o emulsions than silica particles, due to their higher affinity for the apolar phase than for water [72]. The affinity of a particle to one of the phases translates into a preferred immersion depth into one phase or the other, changing in this way the curvature of the interface toward one phase or the other, as depicted in Fig. 4.12. The affinity of a particle to the interface has to do with its

**Fig. 4.12** Cartoon depicting the emulsion phase as function of the immersion depth (affinity) of a particle into the oil phase or the water phase (left) formation of o/w emulsions when the affinity of the particles is greater for water; (right) formation of w/o emulsion when the affinity of the particles is greater for oil

wettability, contact angle and, eventually, its immersion depth at the interface. If the immersion depth in one of the phases is greater, the curvature of the interface will be such that the dispersed phase becomes the phase in which the particles are least immersed.

Why choose JPs over HPs for Pickering emulsions? It has been demonstrated that JPs are more interfacially active than HPs due to their amphiphilicity. From the thermodynamic point of view, JP-stabilised emulsions are energetically more favourable than HP-stabilised ones due to the positive line tension acting at the three-phase oil–water–particle contact line [73]. In addition, the surface polarity of HPs can be hard to control by surface chemical modification, which often involves surface capping agents that are themselves surface active and interfere with the emulsification ability. This is why surfactant-free JPs are more attractive than HPs for emulsification and other interfacial applications: their overall polarity can be tuned precisely and gradually. Their surface energy can be varied to the desired conditions by changing the aspect ratio between the lobes of different polarities, without using modifying agents such as surfactants.

For example, a homologous series of five nano-sized PS/P(3-TSPM) JPs with different relative lobe sizes was tested for the ability to emulsify different volumetric ratios of heptane:water mixtures (heptane is a purely apolar liquid) [74]. Photographs of the emulsions obtained with this JP series and the corresponding fluorescence microscopy images are presented in Fig. 4.13, whereby the top row depicts the SEM images of each particle in the homologous series; the first particle is the PS HP from which the second Janus lobe (the brighter lobe) was generated. In addition, the oil phase is fluorescent and the dark phase is water. The yellow line delimits the boundary at which the emulsion phase inversion from w/o (above the line) to o/w (below the line) takes place. The horizontal yellow line depicts a transitional emulsion phase inversion that depends on the polarity of

**Fig. 4.13** Formulation—composition maps with photographs of emulsions in glass vials and their corresponding fluorescence microscopy images (scale bar is 400 nm) obtained with PS/P(3-TSPM) JPs. The top row depicts seed HPs and five PS/P(3-TSPM) JPs with increasing P(3-TSPM) lobe sizes (scale bar is 200 nm), while the subsequent three rows represent a different volumetric ratio of heptane to water and the six columns represent the emulsification results from each particle. The yellow line indicates the w/o and o/w emulsion phase boundary; the vertical arrow indicates the catastrophic and the horizontal the "static" transitional phase inversion. The fluorescent phase is the oil phase and the dark phase the water. Reprinted with permission from Ref. [1]. Copyright 2016 American Chemical Society

the particle, its affinity to one of the phases and, eventually, its immersion depth, according to the cartoon in Fig. 4.12. This is the principle behind creating stimuli-responsive emulsions, discussed later. The vertical arrow indicates a catastrophic phase inversion that depends on the oil:water volume ratio: when the volume of one of the phases is considerably lower than that of the other, the probability that it becomes the dispersed phase is higher. Note that the catastrophic phase inversion does not affect the particle immersion depth at the interface. From the results in Fig. 4.13 it is clear that the HPs (first column) are apolar, because they are only capable of forming w/o Pickering emulsions for all heptane:water ratios. A transitional emulsion phase inversion from w/o to o/w takes place in the middle of the homologous series, meaning that with the growth of the P(3-TSPM) lobe the JPs become more polar, and the JPs with the largest lobes have a greater affinity for water.

Other types of oils, differing in polarity and viscosity, can be emulsified in water with JPs. Monomers, fragrant oils, polymers and organic solvents can all be emulsified into Pickering emulsions. Further, Honciuc et al. [42] have shown that by changing the polarity of the emulsified oil, the interfacial energy of the particles with the oil and water can be estimated. If monomers are used as the oil, the Pickering emulsion can subsequently be polymerised, resulting in solid-state polymers with nanostructured surfaces, see Fig. 4.14, and other advanced materials [42].

**Fig. 4.14** (**a**) Polystyrene/JNP colloidosomes resulting from the polymerisation of a styrene-inwater emulsion obtained with PS/P(3-TSPM) JPs. Reprinted with permission from Ref. [42]. (**b**)- (**d**) zoomed in surface regions of the colloidosome, showing tight packing of the JPs monolayer. Copyright 2017 American Chemical Society

*Stimuli-responsive Pickering emulsions* can also be designed by using stimuli-responsive particles. Upon adsorption of the particles at interfaces, the emulsion acquires the functionality of the populating particles. For example, Tu and Lee [38] created stimuli-responsive Pickering emulsions from PS/PAA JPs which are capable of phase inversion: at high pH the -COOH groups deprotonate, the polarity of the particles increases and they become capable of forming o/w emulsions.

In a different example, in Pickering emulsions stabilised by PDIPAEMA/P(3-TSPM) JPs it was also possible to induce an emulsion phase inversion by changing in situ the pH value of the water phase below and above the pKa value of the -NR3 groups at the surface of the PDIPAEMA JP lobe, Fig. 4.15. When the pH is changed in situ and an already formed Pickering emulsion inverts its phase, this is called a dynamic transitional emulsion inversion [72], in contrast to a static transitional emulsion inversion, which assumes preparation of the emulsion at the given pH. Such pH-responsive Pickering emulsions could be employed in encapsulation and triggered-release applications.

The use of Pickering emulsions in phase-selective catalysis demonstrates the potential advantages and opportunities offered by the asymmetric architecture of the JPs. A conclusive example is that of Resasco et al. [75], who produced JPs and loaded them with Pd nanoparticles either selectively, only on the hydrophobic side, to produce Pd/JPs, or non-selectively, everywhere, to produce HPs. With these two types of particles they then created Pickering emulsions from decalin and water. The decalin phase contained benzaldehyde, which was insoluble in

**Fig. 4.15** Pickering emulsions stabilised by PDIPAEMA/P(3-TSPM) JPs showing dynamic emulsion phase inversion with pH: (**a**) as-prepared o/w Pickering emulsion (toluene:water = 4:5 ratio, pH = 4.0) changing to w/o after addition of base; (**b**) as-prepared w/o Pickering emulsion (toluene:water = 4:5 ratio, pH = 10) changing to o/w after addition of acid. Top: photographs of the vials containing the Pickering emulsions, with 0.1% hydrophobic dye; bottom: fluorescence microscopy images showing the corresponding Pickering emulsion type (scale bar = 200 μm). Reprinted with permission from Ref. [62]. Copyright 2017 American Chemical Society

water, and the water phase contained glutaraldehyde, which is insoluble in oil. Next, the two emulsions were hydrogenated and surprising results were obtained: when the catalyst contained Pd on both sides of the Janus particles, high conversion levels were obtained for both reactants, about 80% for glutaraldehyde in the water phase and 100% for benzaldehyde in the oil phase. However, when the catalyst had Pd deposited selectively on the hydrophobic side, the conversion of benzaldehyde remained at 100%, while the conversion of glutaraldehyde decreased to 2%, demonstrating the high phase selectivity of JP-stabilised Pickering emulsions. Similarly, Liu et al. [76] used Au-nanoparticle-modified SiO2/PS-PDVB JPs as interfacial catalysts for the catalytic reduction of 4-nitroanisole to 4-aminoanisole.

# **4.7 Self-Assembly of Janus Particles**

Similar to molecular surfactants, JPs can also self-assemble into suprastructures. The key parameters behind the self-assembly of JPs are the right balance between repulsive and attractive forces and their geometry, which greatly influences the type of suprastructures formed [77, 78]. It is well known that HPs can assemble into colloidal crystals with an iridescent appearance and find use in photonics [79, 80], electronics [81, 82], catalysis [83], (bio)sensing [84, 85], etc. In contrast, JPs can give rise to a larger variety of self-assembled suprastructures [86], such as trimers [87], spherical micelles [88], capsules [89] and crystals [90], which can be of great importance for obtaining novel reconfigurable materials and assemblies at non-equilibrium, also referred to as "active matter", that can perform different functions [91]. This variety of suprastructures arises from the JPs' anisotropy due to geometrical or topological constraints [92, 93], which affect their interaction and packing. JPs find analogues in nature, such as large proteins that exhibit polarity domains on their surface and are therefore anisotropic; such proteins self-assemble into structures with precise morphology and specific functional roles [94]. The ability of JPs to self-assemble into complex and regular structures with unusual and reconfigurable properties [91] is remarkable and is one of the main motivations for further research in this area [93].

The particle–particle interaction drives the self-assembly of JPs in a monophasic solvent, but their assembly can also be mediated by a minority liquid in a biphasic system: for example, placing spherical Au/SiO2 JPs in a water/2,6-lutidine binary fluid leads to 2D and 3D clusters or zig-zag chains [95]. Other examples of liquid-mediated JP self-assembly occur in water/n-dodecane mixtures; in this case water was the minority liquid, mediating the self-assembly via capillary forces due to the formation of water bridges between particles. By progressively increasing the water fraction, a variety of structures of spherical Au/SiO2 JPs, from micelles and worm-like micelles to spherical emulsion droplets, were obtained, Fig. 4.16 [96]. Eventually, when the amount of minority liquid is large enough, these transform into colloidosomes of Pickering emulsions, Fig. 4.16g–j; in addition, the JPs are oriented with their polar lobe toward the water. Hu et al. [97] showed that, by further chemical alteration of the metallic side, linkable Janus metal-organosilica particles capable of forming dimers and trimers can be created.

**Fig. 4.16** Optical microscope images showing the *α*-dependence of the morphology of the self-assembled structures, where *α* is the fraction of water (minority liquid) added to n-dodecane. (**a**–**i**) Optical microscope images of typical structures formed at the respective *α*. (**a**) Random aggregate. (**b**) Small micelle-like cluster. (**c**–**e**) Rod-shaped micelle-like clusters. (**f**) Structure observed at a value of *α* where rod-shaped micelle-like clusters and spherical droplets coexist. (**g**–**i**) Spherical droplets in emulsions. (**i**) Hemispherical droplet attached to the bottom of the observation cell. (**j**) Magnified image of the framed region in (**i**). (**k**) Diagram of the *α*-range of the observed structures. The scale bars are 5 μm in (**a**) and (**b**), 10 μm in (**c**)–(**h**) and (**j**) and 50 μm in (**i**). Reprinted with permission from Ref. [96]. Copyright 2017 American Chemical Society

Because JPs can adsorb at the air–water interface, they can also act as gas bubble stabilisers, and air bubbles can thus serve as templates for self-assembly. Gas bubbles and emulsion droplets have similar properties, e.g. the gas can be thought of as a highly hydrophobic fluid. For example, Fujii et al. [98] obtained large mono-walled vesicles from Au/SiO2 JPs; the orientation of the JPs in the walls could be changed by chemical modification of the Au lobe with different polymers, PS or PPFBEM (Fig. 4.17).

**Fig. 4.17** SEM images of bubbles stabilised by the (**a**, **d**) Au-SiO2, (**b**, **e**) PS-g-Au-SiO2 and (**c**, **f**) PPFBEM-g-Au-SiO2 Janus particles. Panels **d**–**f** are magnifications of panels **a**–**c**, respectively. Reprinted with permission from Ref. [98]. Copyright 2017 American Chemical Society

# **4.8 JP-Based Nanomotors**

Self-propelled, active colloidal systems are of great fundamental interest, with potential applications in nanomachinery, nanoscale assembly, catalysis and sensing [99]. Due to their asymmetry, JPs possess the right architecture for making nanomotors [100], and the ways to power these externally have been extensively discussed by Shields and Velev [100]. For self-propulsion a motor needs fuel, and in order to propel a Janus particle one of its sides must be made apt for propulsion. One way to achieve this is by making one lobe of the JP from a metal that catalyses the decomposition of H2O2 on its surface, such as Pt; the resulting decomposition products, O2 and H2O, act as a propelling jet for the JP. In this way the motion of the particle deviates from pure Brownian motion, and the diffusivity of the particle in the bulk solution is enhanced. Self-propelled JPs are also capable of transporting a cargo. Sanchez et al. [101] coated one side of mesoporous silica nanoparticles with a 2 nm Pt layer by evaporation. The resulting JPs exhibited a diffusion coefficient enhanced by up to 100%, and Rhodamine B could be loaded into the pores of the silica nanoparticles. The presence of H2O2 may, however, limit their in vivo application for drug delivery. Wang et al. [102] used a different fuel/metal system, depositing Ir metal on one side of silica particles and using hydrazine (a well-known monopropellant for rocket engines); N2, H2 and NH3 molecules are generated at the Ir surface and, as a result, the JP motor moves unidirectionally in the direction of the silica face. The group of Joseph Wang also showed that the cargo-transport capabilities of JP self-propelled motors are not

**Fig. 4.18** Self-organised cargo loading. Time-lapse images (over 2 s) of the track lines of motor/nonmotor assemblies: transport of 3 (**a**) and 6 (**b**) nonmotor spheres (taken from SI Video 7). Scale bar, 5μm. Reprinted with permission from Ref. [103]. Copyright 2013 American Chemical Society

limited only to small molecules but extend to other particles: using H2O2 fuel and Pt/SiO2 JPs hydrophobised on the SiO2 side with OTS, they observed that JP motors can anchor HPs via hydrophobic interactions and then transport them unidirectionally to a different location [103]. Impressively, one JP motor could transport up to six HPs. The mechanism of self-propulsion is well understood [104]: it arises from an asymmetric catalytic reaction occurring on the surface of the JPs. The role of the geometric shape in the unidirectional propulsion of JPs has also been discussed [105]. By trapping Pt/SiO2 JPs at the air/water interface, Stocco et al. [106] were able to enhance the unidirectional motion of JPs compared to the bulk, owing to the slowing down of the rotational diffusion at the interface, which becomes constrained in a rotational well [36].

The JPs' capabilities for self-assembly and unidirectional motion can be combined to achieve unique operational functions, such as particle-cargo applications [102, 103]. Figure 4.18a depicts the asymmetric cluster formed between a JP and three HPs, all attached to the same JP lobe. The dark lobe catalyses a chemical decomposition reaction that propels the cluster, carrying the other particles unidirectionally to a different location; one JP can transport up to six HPs, Fig. 4.18b.

# **4.9 Conclusions**

Janus particles have demonstrated a clear superiority over HPs in terms of interfacial behaviour and are therefore attractive for a plethora of interfacial applications. The stabilisation of Pickering emulsions, gas bubbles and foams are typical applications in which JPs have revealed their versatility. Further, due to their asymmetric architecture, JPs open new horizons for particle applications. A few such possibilities have already been demonstrated, such as self-assembly into reconfigurable suprastructures, paper-display applications of bi-coloured JPs, or the cargo loading and transport of up to six particles by a single JP from point A to point B through chemically powered unidirectional motion. The future application potential of JPs appears limitless, and further research may uncover even more extraordinary functions. The strength of these particles lies in their ability to carry different, often contrasting, properties and functionalities on each of their lobes. These can be bulk-like or surface properties: for example, one can combine electric, magnetic and optical properties on each Janus lobe, in addition to amphiphilicity, a surface property [6]. In this way, surface and bulk properties can be coupled and combined in surprising new ways. Never before has the application potential of particles been more exciting than that opened up by Janus particles.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 5 Upscaling Flow and Transport Processes**

**Matteo Icardi, Gianluca Boccardo, and Marco Dentz**

# **5.1 Introduction**

Countless environmental, industrial and biological applications involve fluids flowing through complex media or heterogeneous environments. These can be soil, sand and rocks in aquifers and reservoirs, industrial separation and filtration devices, biological membranes and tissues, or composite materials. Although all the fundamental laws and modelling approaches of fluid dynamics still apply, a completely different perspective has to be taken to deal with the geometrical and physical complexity and multiscale structure of the underlying media. This is generally done by means of upscaling or averaging techniques, not unlike those used to deal with the multiscale structure of turbulence. The main difference between flows through porous media and turbulent flows lies in the fact that the latter is an emergent phenomenon purely due to the nonlinearity of the Navier–Stokes equations, while the former inherits its multiscale complexity directly from the geometrical and physical properties of the material. This means that, even starting from linearised or simplified flow regimes (e.g., Stokes), interesting emergent macro-scale dynamics can appear due to these properties. The ultimate scientific challenge is to develop a quantitative link between the properties of the media and the upscaled parameters in the macroscopic dynamics. Although a wide range of these emerging dynamics are more easily observed and studied than the turbulent structures (which, by nature, are

M. Icardi (✉)
School of Mathematical Sciences, University of Nottingham, Nottingham, UK
e-mail: matteo.icardi@nottingham.ac.uk

G. Boccardo
DISAT, Politecnico di Torino, Torino, Italy

M. Dentz
IDAEA, CSIC, Barcelona, Spain

© The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9_5

hardly reproducible),<sup>1</sup> the high dimensionality of the set of all possible geometrical structures makes systematic investigation and predictive model development (e.g., closures, parameter estimation) particularly challenging. Just as turbulence models are naturally first developed and tested on clearly defined scenarios (such as periodic isotropic turbulence or wall-bounded flows), upscaled porous-media models have traditionally been derived for simple granular materials, such as sphere packings. While the former usually assume the existence of a continuum of length scales, as dictated by classical turbulence theory, the latter typically rely on clearly defined and well-separated scales (usually two, possibly more). These assumptions, however, are only very crude approximations of actual natural and engineered media, and can significantly misrepresent the overall transport processes.

In this chapter, while presenting the fundamentals of flow and transport through porous media (intended in the classical sense) and some of the specific methodologies and challenges, we take a more general point of view, focused on the underlying upscaling procedures and their assumptions, to help smooth out the still-existing barrier between porous media and fluid dynamics research.

# **5.2 Flow Through Porous and Heterogeneous Media**

As already mentioned, although the concepts of upscaling and averaging are present in many fluid dynamics problems, particularly relevant to many applications is the understanding of the emerging dynamics of a fluid flowing through multiscale (porous) materials. As we will discuss in Sect. 5.2.1, the peculiarity of this problem is the presence of large surface areas where no-slip conditions generate, to a first approximation, a linear damping in the momentum equation, proportional to an effective parameter known as the *permeability* of the medium. However, natural porous materials, such as soil and rocks, can have a highly irregular and heterogeneous structure, making this emerging effect significantly space-dependent. Due to the limited a priori knowledge of the exact geology, this spatial heterogeneity is often modelled as a random field. This gives rise to another important upscaling problem, namely understanding the effect of meso- and macro-scale heterogeneities in the permeability. This is discussed in Sect. 5.2.3.

In both cases, crucial to the upscaling process is the solution of a closure problem on a *representative elementary volume* (REV). This can be understood at different levels. The simplest definition of REV is based solely on geometrical information, such as the porosity of the material, *φ*, i.e., the volume fraction of void space available to the fluid. Being a very simple averaging process, this can be computed over different length scales ℓ, i.e., *φ* = *φ(*ℓ*)*. For increasing ℓ, if the medium has only finite-size heterogeneities (or well-separated scales) and no fractal structure, this converges to a finite number 0 *< φ*<sub>0</sub> *<* 1, which implicitly defines a minimal geometrical REV. Even assuming the existence of this well-defined REV, the actual upscaling of transport and flow processes involves averaging fluctuating quantities (or closure variables), which are solutions of a differential model; these might require a much larger volume to converge to a constant. For periodic structures, under the assumptions of stationary (fully developed) profiles and local equilibrium (imposed through the pseudo-periodicity of the variables), the periodic cell represents not only the geometric REV but also the right REV for all processes. Relaxing these periodicity assumptions means allowing random "perturbations" in the material, which result in a larger geometrical REV and, possibly, in perturbations of the solution persisting over larger scales. This means that, to obtain well-defined (e.g., space-independent) macroscopic effective parameters, the existence and size of the REV cannot be known a priori and could be significantly larger than what purely geometrical considerations suggest. This applies not only to the upscaling of the flow discussed below (which could indeed need an REV much larger than the geometrical one) but also to the upscaling of transport and reaction processes (discussed in Sect. 5.3) and, even more significantly, to all non-linear and more complex models (such as multiphase flows).

<sup>1</sup>For example, with the recent development of 3D printing, a wide range of porous media structures can be synthetically recreated and tested.
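The convergence of the porosity with the size of the sampling window can be sketched numerically. The toy medium below uses uncorrelated void/solid voxels with void fraction 0.4; all values are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Toy illustration of a geometrical REV: porosity phi(ell) computed
# over growing sampling windows of a synthetic binary medium
# (1 = void, 0 = solid). Uncorrelated voxels are an assumption made
# for simplicity; real media have finite-size heterogeneities.
rng = np.random.default_rng(3)

N, phi0 = 1024, 0.4
medium = (rng.random((N, N)) < phi0).astype(float)

def porosity(ell):
    """Porosity averaged over a centred ell-by-ell window."""
    c, half = N // 2, ell // 2
    return medium[c - half : c + half, c - half : c + half].mean()

for ell in (8, 32, 128, 512):
    print(ell, porosity(ell))
```

For a medium with heterogeneities correlated over a length ℓ<sub>c</sub>, the same experiment would only converge for windows ℓ ≫ ℓ<sub>c</sub>, which is the geometrical REV criterion discussed above.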

# *5.2.1 Darcy's Law*

The earliest approaches to the study of flow in porous media were directed at deriving a simple linear relation between pressure drop and superficial velocity, implicitly relying on a macroscopic description of a continuous (pseudo-homogeneous) fluid–solid domain. Henry Darcy, who investigated the sand filter system employed in the delivery of freshwater to the city of Dijon, first proposed this relation, now known as *Darcy's law*:

$$-\frac{\delta P}{L} = \frac{\mu}{K}q,\tag{5.1}$$

where *δP* is the integral pressure drop (or so-called *pressure head*, including the hydrostatic pressure) across the porous medium, *L* is its length, *μ* is the fluid viscosity, *q* is the fluid superficial velocity and *K* is the permeability, which characterises flow in the saturated porous medium. This law, originally derived on purely phenomenological and experimental grounds, can be intuitively extended to three dimensions, as a force balance between the pressure gradient and linear wall stresses, neglecting the transient and inertial terms in an upscaled form of the Navier–Stokes equation:

$$-\mathbf{K}\nabla P = \mu \mathbf{q}\,,\tag{5.2}$$

where **K** is now, more generally, a symmetric tensor, not necessarily isotropic: pressure gradients in one direction can cause flow in a different direction, due to non-symmetric porous structures. This result can also be rigorously derived with the tools of homogenisation or volume averaging [1, 2], upscaling the incompressible Stokes equations to obtain Darcy's law. Equation (5.1), while still useful in many porous media systems, has its limitations. The first is related to the magnitude of the superficial velocity *q*. More precisely, and by analogy with the usual analysis of the laminar–turbulent transition, it can be expressed in terms of the Reynolds number, where the system's characteristic length is the average grain diameter or pore width. In the vast majority of cases, Darcy's law finds an upper range of validity at Re between one and ten [3]. Other cases, where a more complex equation has to be used, include the already mentioned fractal porous media (where the permeability is no longer a constant but a non-local kernel), multiphase flows, non-Newtonian fluids and non-equilibrium flows.
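As a minimal numerical illustration of Eq. (5.1), the sketch below computes the superficial velocity for an assumed set of fluid and medium properties (all numbers are illustrative, not taken from the text) and then inverts the same relation to recover the permeability, as one would from a laboratory measurement:

```python
# Darcy's law (Eq. 5.1): -dP/L = (mu/K) q. Numbers below are
# illustrative assumptions: water through a sand column.
mu = 1.0e-3   # dynamic viscosity of water [Pa s]
K = 1.0e-11   # permeability of a fine sand [m^2]
dP = -1.0e4   # pressure drop P_out - P_in across the column [Pa]
L = 0.5       # column length [m]

# Superficial (Darcy) velocity from Eq. (5.1)
q = -(K / mu) * (dP / L)

# Inverting the same relation recovers the permeability,
# as one would do from a measurement of q and dP.
K_est = -mu * q * L / dP
print(q, K_est)
```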

# *5.2.2 Extensions of Darcy's Law*

For high Reynolds numbers, the linear relationship of Eq. (5.1) between superficial velocity and hydraulic gradient (*δP/L*) ceases to be valid, making Darcy's law unsuitable for describing the nonlinearities arising under these conditions. Although there has been some controversy [4, 5] about the correct extension of Darcy's law to transitional and turbulent flows, the most commonly used equation to that end is the *Darcy–Forchheimer equation*:

$$-\frac{\delta P}{L} = \frac{\mu}{K}q + \beta \rho q|q|,\tag{5.3}$$

where *β* is the so-called inertial flow parameter and, like *K*, is independent of the fluid properties and depends only on the microstructure of the porous medium. Various explanations of this phenomenon have been attempted, the most intuitive of which is to ascribe the nonlinearity to the onset of turbulence, by immediate analogy with the relationship between head loss and fluid velocity in pipe flow, which becomes non-linear right after the transition to the turbulent regime at higher Reynolds numbers. The problem with this approach is that, while for pipe flow the laminar and turbulent zones are clearly identifiable, the transition in the case of flow in porous media is much smoother, with no clear separation between the two; this can be related to what is known for the flow around spheres, where the same behaviour is found. A number of experiments conducted in the past to identify the critical Reynolds number associated with the transition to turbulence in porous media found it to be several orders of magnitude higher than the Re at which the nonlinearities first become apparent [6].
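Since Eq. (5.3) is quadratic in *q*, the superficial velocity for a given pressure gradient *G* = −*δP/L* follows from its positive root. A short sketch, with all parameter values assumed for illustration:

```python
import numpy as np

# Darcy-Forchheimer (Eq. 5.3): G = (mu/K) q + beta*rho*q|q|, with
# G = -dP/L > 0. For q > 0 this is a quadratic in q; we take the
# positive root. All parameter values are illustrative assumptions.
mu, rho = 1.0e-3, 1.0e3     # viscosity [Pa s], density [kg/m^3]
K, beta = 1.0e-9, 1.0e4     # permeability [m^2], inertial parameter [1/m]

def forchheimer_velocity(G):
    """Superficial velocity for a pressure gradient G = -dP/L."""
    a, b = beta * rho, mu / K
    return (-b + np.sqrt(b**2 + 4.0 * a * G)) / (2.0 * a)

G = 1.0e5                   # pressure gradient [Pa/m]
q = forchheimer_velocity(G)
q_darcy = K * G / mu        # pure-Darcy prediction, for comparison
print(q, q_darcy)
```

The inertial term always makes the computed velocity smaller than the pure-Darcy prediction at the same pressure gradient.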

Beyond these difficulties, caused by non-trivial changes in the pore-scale fluid dynamic structure when transitioning to high Reynolds numbers, there are also a number of other notable extensions, for which brief pointers follow.

### **Multiphase and Unsteady Flows**

While single-phase flow in porous media (also known as *saturated* flow) is generally steady, a trivial extension is to add a time-derivative term to model unsteadiness caused, for example, by time-dependent pressure boundary conditions. When dealing with multiphase flows, however, time dependence appears naturally. While density-driven miscible Darcy flows are easily obtained, immiscible multiphase extensions of Darcy's equation rely on much stronger assumptions (see also the discussion in Sect. 5.5). The simplest multiphase model is *Richards' equation*, which describes the movement of a water plume through a partially saturated medium within the Darcian framework.

### **Brinkman**

One early and well-known approach to bridging the gap between the free-flow and Darcy descriptions was put forward by Brinkman, whose eponymous equation adds a viscous term to Darcy's equation (usually with an effective viscosity not necessarily equal to the viscosity of the fluid). Rigorous derivations of the Brinkman equation by homogenisation, with the simultaneous presence of the Darcy and Brinkman terms, have been proposed [1] under a specific scaling of the geometrical properties of the porous structure. Furthermore, the Brinkman equation has found interesting applications as a unified numerical approximation (e.g., penalisation approaches [7]) that recovers both the Stokes and Darcy equations in the appropriate limits.

### **Non-Newtonian**

When considering non-Newtonian fluids, Darcy's law is still used at low fluid velocities, with a modified "porous medium viscosity" comprising the non-Newtonian effects. In the higher velocity ranges, the interplay between shear-thickening effects and turbulent nonlinearities becomes more difficult to understand: both formal upscaling attempts via volume averaging [8] and accurate computational pore-scale simulations in reconstructed geometries [9] have been presented.

### **Knudsen**

Finally, a constant assumption of all the theory presented up to this point (and henceforth, excluding this paragraph) has been to consider the fluid as a continuum and, as such, to employ the aforementioned *no-slip* condition on the solid matrix boundary. In practice, many real-world systems (e.g., rarefied gases, shale gas) are characterised by Knudsen flow and are not treatable within the usual framework, leading to non-trivial slip-flow corrections to the effective permeability.

# *5.2.3 Heterogeneous Media*

As described in the previous section, the (space-)averaged flow behaviour on the scale of an REV is described by Darcy's equation. Here we consider the transition from the Darcy scale to larger scales of heterogeneity, which for geological porous media means the order of metres to hundreds of metres. Spatial or ensemble averaging (over random realisations of geological structures), or a combination of both, can be used to study the emerging macroscopic dynamics. We denote both averaging operations with the bracket notation ⟨·⟩ and limit our discussion to the steady-state Darcy flow equation with heterogeneous medium properties,

$$\mathbf{q}(\mathbf{x}) = -\frac{K(\mathbf{x})}{\mu} \nabla P(\mathbf{x}), \qquad \qquad \nabla \cdot \mathbf{q}(\mathbf{x}) = 0, \tag{5.4}$$

which is equivalent to an elliptic (Laplace-type) equation for the pressure. This description implies that the flow field is helicity-free, i.e., **q***(***x***)* · ∇ × **q***(***x***)* = 0. This means that there are no closed streamlines in *d* = 2 dimensional Darcy flow. For *d* = 3 dimensions, zero helicity implies that streamlines are either closed or organised on two-dimensional tori [10]. These topological properties prohibit chaotic flow and thus have an impact on the stretching of material lines and surfaces.

The systematic upscaling of flow and transport in heterogeneous porous media has been performed with stochastic approaches in order to model the spatial variability of permeability [11–13]. This is motivated, on the one hand, by the incomplete knowledge of the small-scale fluctuations of *K(***x***)* and, on the other hand, by the desire to identify the large-scale behaviour due to "typical" spatial random fluctuations and to quantify it in terms of only a few geostatistical characteristics. This requires certain assumptions, such as statistical stationarity and ergodicity. In this framework, the log-permeability *f(***x***)* = ln[*K(***x***)*] has been modelled as a multi-Gaussian random field, which implies that *K(***x***)* is a multi-lognormal random field. This can be understood as follows. Consider a set of *f(***x***)* values evaluated at positions **x***<sub>i</sub>* (*i* = 1*,...,n*) in the medium. The set {*f(***x***<sub>i</sub>)*, *i* = 1*,...,n*} is modelled as a spatial stochastic process, characterised by a joint Gaussian PDF with covariance matrix *C<sub>ij</sub>* = C*(***x***<sub>i</sub>* − **x***<sub>j</sub>)*,

$$\mathcal{P}(\{f(\mathbf{x}_i)\}) = \frac{\exp\left[-\sum_{i,j=1}^{n} f(\mathbf{x}_i)\,C_{ij}^{-1}\,f(\mathbf{x}_j)/2\right]}{\sqrt{(2\pi)^{n}\det(\mathbf{C})}}.\tag{5.5}$$

The variance of *f(***x***)* is given by *σ*<sup>2</sup><sub>ff</sub> = C*(***0***)*. The covariance C*(***x***)* is typically modelled as a short-ranged function that decays over characteristic length scales, the correlation lengths. For an overview of common covariance models, see Refs. [11–13]. A statistically isotropic medium is characterised by a single correlation scale ℓ. For anisotropic media, the correlation scale depends on the spatial direction.

In this framework, the flow is upscaled in terms of an effective permeability tensor **K***<sup>e</sup>* [14, 15], which is defined by

$$
\langle \mathbf{q}(\mathbf{x}) \rangle = -\frac{\mathbf{K}^e}{\mu} \langle \nabla P(\mathbf{x}) \rangle. \tag{5.6}
$$

Here we focus on statistically isotropic media, for which *K<sup>e</sup><sub>ij</sub>* = *K<sup>e</sup>δ<sub>ij</sub>*. Note that *K<sup>e</sup>* is in general not equal to the arithmetic average ⟨*K(***x***)*⟩.

In the following, we first report some exact results for the effective permeability of flow in layered media and in two-dimensional multi-Gaussian permeability fields. Then we briefly discuss perturbation theory results and conjectures for three-dimensional media.

### **Exact Solutions**

For layered porous media, the permeability is constant along one coordinate axis and variable in the other directions; the correlation length is thus infinite along that axis. For simplicity, we consider the case of two spatial dimensions. For a pressure gradient parallel or perpendicular to the layering, exact solutions for the effective permeability exist. In the following, the mean pressure gradient is aligned with the *x*-direction.

For flow aligned with the direction of stratification, the flow problem has an exact solution, which is

$$\mathbf{q}(y) = -\frac{K(y)}{\mu} \langle \nabla P(\mathbf{x}) \rangle. \tag{5.7}$$

In this case, *K<sup>e</sup>* = *K<sub>A</sub>* = ⟨*K(y)*⟩: the effective permeability is equal to the arithmetic mean permeability. For flow perpendicular to the stratification, the exact solution is

$$\mathbf{q}(\mathbf{x}) = -\left[\frac{\mu}{L} \int \frac{d\mathbf{x}'}{K(\mathbf{x}')}\right]^{-1} \langle \nabla P(\mathbf{x}) \rangle. \tag{5.8}$$

where *L* is the length of the flow domain. Here the effective permeability is given by the harmonic mean, *K<sup>e</sup>* = *K<sub>H</sub>* = ⟨1*/K(x)*⟩<sup>−1</sup>.

For flow in isotropic two-dimensional multi-lognormal permeability fields with finite correlation length, the effective permeability is exactly given by the geometric mean [16, 17],

$$K^{e} = K_G \equiv \exp\left(\langle f(\mathbf{x})\rangle\right). \tag{5.9}$$

This result can be derived from a duality between the stream function and the flow potential, using the fact that both *K(***x***)* and 1*/K(***x***)* are lognormally distributed.

### **Perturbation Theory**

For three-dimensional heterogeneous porous media, the duality argument invoked in two dimensions does not hold. Thus, the effective permeability has been determined using perturbation theory in the fluctuations of the log-permeability about its mean value, *f'(***x***)* = *f(***x***)* − ⟨*f(***x***)*⟩, which gives [18–20]

$$K^{e} = K_G \left( 1 + \frac{\sigma_{ff}^2}{6} \right),\tag{5.10}$$

which is strictly valid only for *σ*<sup>2</sup><sub>ff</sub> ≪ 1. For larger values of *σ*<sup>2</sup><sub>ff</sub> and *d* spatial dimensions, Matheron [18] conjectured the expression

$$K^{e} = K_G \exp\left[\sigma_{ff}^2 \left(\frac{1}{2} - \frac{1}{d}\right)\right],\tag{5.11}$$

which for *d* = 2 recovers the exact result *K<sup>e</sup>* = *K<sub>G</sub>* and which, for *d* = 3 and *σ*<sup>2</sup><sub>ff</sub> ≪ 1, is consistent with the perturbation theory result, Eq. (5.10). The effective permeability is bounded between the harmonic and arithmetic means, *K<sub>H</sub>* ≤ *K<sup>e</sup>* ≤ *K<sub>A</sub>*.
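These bounds and Matheron's conjecture, Eq. (5.11), can be checked on a sampled lognormal permeability; the sample size and variance below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical check of the bounds K_H <= K^e <= K_A: Matheron's
# conjecture (Eq. 5.11) evaluated for d = 2, 3 on a lognormal
# permeability sample (sigma_ff and sample size are illustrative).
rng = np.random.default_rng(0)
sigma_ff = 0.5
f = sigma_ff * rng.standard_normal(100_000)  # f = ln K, zero mean

K = np.exp(f)
K_A = K.mean()                     # arithmetic mean
K_H = 1.0 / np.mean(1.0 / K)       # harmonic mean
K_G = np.exp(f.mean())             # geometric mean

# Matheron's conjecture for the effective permeability
K_e = {d: K_G * np.exp(sigma_ff**2 * (0.5 - 1.0 / d)) for d in (2, 3)}
print(K_H, K_G, K_A, K_e)
```

Note that *K<sub>H</sub>* ≤ *K<sub>G</sub>* ≤ *K<sub>A</sub>* holds exactly even in-sample, by Jensen's inequality applied to the empirical measure.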

# **5.3 Macroscopic Transport Models**

Transport in heterogeneous media can be described by the advection–dispersion equation

$$\frac{\partial c(\mathbf{x},t)}{\partial t} + \nabla \cdot \mathbf{v}(\mathbf{x})c(\mathbf{x},t) - \nabla \cdot \left[\mathbf{D}(\mathbf{x})\nabla c(\mathbf{x},t)\right] = 0. \tag{5.12}$$

At the pore scale, the velocity field $\mathbf{v}(\mathbf{x})$ is obtained from the Stokes equation and the dispersion tensor reduces to $\mathbf{D} = D_m \mathbf{I}$, where $D_m$ is the molecular diffusion coefficient and $\mathbf{I}$ the identity matrix. At the Darcy scale, a similar equation can be derived by homogenisation or volume averaging [1, 2], with the flow velocity given by $\mathbf{v}(\mathbf{x}) = \mathbf{q}(\mathbf{x})/\phi$, where $\phi$ is the porosity, which here is assumed to be constant, and the dispersion tensor $\mathbf{D}(\mathbf{x})$ given by the solution of a closure problem. Alternatively, dimensional and phenomenological arguments lead to the following parameterisation [21, 22]:

$$D_{ij} = \alpha_0 D_m \delta_{ij} + \sum_{k,l=1}^d \alpha_{ijkl} \frac{q_k(\mathbf{x}) q_l(\mathbf{x})}{\|\mathbf{q}(\mathbf{x})\|},\tag{5.13}$$

where $\alpha_0 D_m$ is the effective diffusivity (see Sect. 5.5), and the $\alpha_{ijkl}$ are geometrical dispersivities. For an isotropic medium, the $\alpha_{ijkl}$ are given by

$$
\alpha_{ijkl} = \alpha_{II}\delta_{ij}\delta_{kl} + \frac{\alpha_I - \alpha_{II}}{2} \left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right). \tag{5.14}
$$

This description of dispersion is valid at high Péclet numbers. The Péclet number compares the relative strength of diffusive and advective transport mechanisms and is here defined as Pe = *V L/D*, where *L* is a characteristic heterogeneity length scale and *V* a characteristic velocity.

The advection–dispersion Eq. (5.12) is equivalent to an Itô stochastic differential equation [23, 24] for the position $\mathbf{x}(t)$ of a solute particle

$$\frac{d\mathbf{x}(t)}{dt} = \mathbf{v}[\mathbf{x}(t)] + \nabla \cdot \mathbf{D}[\mathbf{x}(t)] + \sqrt{2\mathbf{D}[\mathbf{x}(t)]} \cdot \xi(t),\tag{5.15}$$

where $\xi(t)$ is a Gaussian white noise with zero mean, $\langle \xi(t)\rangle = \mathbf{0}$, and covariance $\langle \xi_i(t)\xi_j(t')\rangle = \delta_{ij}\delta(t - t')$. Equation (5.15) is the starting point for random walk particle tracking simulations for the solution of advective–dispersive transport in heterogeneous porous media.
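A minimal random walk particle tracking sketch of Eq. (5.15), assuming a constant velocity $v_0$ and dispersion coefficient $D_0$ in one dimension (so the drift correction $\nabla \cdot \mathbf{D}$ vanishes); all parameter values are illustrative:

```python
import random
import statistics

# Illustrative parameters; dt must resolve the advective and diffusive scales.
random.seed(42)
v0, D0, dt, n_steps, n_particles = 1.0, 0.1, 5e-2, 200, 5000

x = [0.0] * n_particles
for _ in range(n_steps):
    # Euler-Maruyama step: dx = v0 dt + sqrt(2 D0 dt) * N(0, 1)
    x = [xi + v0 * dt + (2.0 * D0 * dt) ** 0.5 * random.gauss(0.0, 1.0)
         for xi in x]

t = n_steps * dt                 # t = 10
m = statistics.fmean(x)          # first moment, expect v0 * t = 10
kappa = statistics.pvariance(x)  # centred second moment, expect 2 * D0 * t = 2

assert abs(m - v0 * t) < 0.1
assert abs(kappa - 2.0 * D0 * t) < 0.2
```

The sampled moments reproduce the Fickian signatures $m(t) = v_0 t$ and $\kappa(t) = 2D_0 t$ discussed in Sect. 5.3.1.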

A key issue for transport in heterogeneous media is to quantify the transport behaviours on a scale larger than the characteristic heterogeneity scale. For the transition from pore to Darcy scale, the observation scale is larger than the characteristic pore length, for the transition from Darcy to regional scale, it is larger than the correlation scale of permeability. In the following, we briefly report on upscaling efforts in terms of Fickian transport formulations, the occurrence of anomalous dispersion and modelling approaches to account for non-Fickian transport.

# *5.3.1 Fickian Dispersion*

Before discussing Fickian large scale transport formulations, we briefly summarise some signatures of Fickian transport for one-dimensional transport at constant velocity $v_0$ and diffusion coefficient $D_0$. Firstly, for a point-like solute injection, the concentration distribution is Gaussian shaped,

$$c_0(x, t) = \frac{\exp\left[-\frac{(x - v_0 t)^2}{4D_0 t}\right]}{\sqrt{4\pi D_0 t}}.\tag{5.16}$$

The first and second centred moments of $c(x, t)$, denoted by $m(t)$ and $\kappa(t)$, evolve linearly in time as $m(t) = v_0 t$ and $\kappa(t) = 2D_0 t$. Solute *breakthrough*, i.e., the distribution of solute arrival times at a control plane located at a distance $x$ from the plane at which the solute is injected, is given by the inverse Gaussian distribution

$$f\_0(t,x) = \frac{x \exp\left[-\frac{(x-v\_0t)^2}{4D\_0t}\right]}{\sqrt{4\pi D\_0t^3}}.\tag{5.17}$$
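These Fickian signatures can be checked numerically; the sketch below (with illustrative parameter values) verifies that the breakthrough curve, Eq. (5.17), is normalised and has mean arrival time $x/v_0$:

```python
import math

# Illustrative parameters: velocity v0, dispersion D0, control plane at x.
v0, D0, x = 1.0, 0.1, 5.0

def f0(t):
    """Inverse Gaussian breakthrough curve, Eq. (5.17)."""
    return (x * math.exp(-(x - v0 * t) ** 2 / (4.0 * D0 * t))
            / math.sqrt(4.0 * math.pi * D0 * t ** 3))

# midpoint quadrature; the curve decays fast beyond t ~ x / v0
dt, T = 1e-3, 50.0
n = int(T / dt)
norm = sum(f0((i + 0.5) * dt) for i in range(n)) * dt
mean = sum((i + 0.5) * dt * f0((i + 0.5) * dt) for i in range(n)) * dt

assert abs(norm - 1.0) < 1e-3      # breakthrough curve is normalised
assert abs(mean - x / v0) < 1e-2   # mean arrival time is x / v0 = 5
```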

### **Hydrodynamic Dispersion**

As already mentioned, the upscaling of transport from the pore to the Darcy scale can be approached by stochastic approaches [25], spatial averaging and homogenisation. Under the assumption of local physical equilibrium, these approaches derive for the (homogeneous) Darcy scale the advection–dispersion Eq. (5.12). The hydrodynamic dispersion tensor $\mathbf{D}$ accounts for the impact of molecular diffusion and pore-scale velocity fluctuations on Darcy-scale solute transport. It is in general a function of the Péclet number. This has been observed both for the longitudinal (i.e., along the mean flow direction) and the transverse dispersion coefficients $D_L$ and $D_T$. For $\text{Pe} \ll 1$, $D_L/D \sim 1$; for $1 < \text{Pe} < \text{Pe}_c$, it behaves as $D_L/D \sim \text{Pe}^{\gamma}$ with $1 < \gamma < 1.5$; and for $\text{Pe} > \text{Pe}_c$ it scales as $D_L/D \sim \text{Pe}$. The critical Péclet number is $\text{Pe}_c \approx 400$–$500$ [26, 27]. These behaviours can be described by the expression [22]

$$D\_L = D\alpha + \alpha\_I \overline{v} \frac{\text{Pe}}{\text{Pe} + 2 + 4\delta^2},\tag{5.18}$$

where $\alpha$ accounts for the effect of the tortuous pore geometry on molecular diffusion in the bulk, $\delta$ is a parameter that characterises the shape of the pore channels, and $\overline{v}$ is the average pore velocity. The second term on the right-hand side of Eq. (5.18) is termed mechanical dispersion. It quantifies solute spreading due to the tortuous streamlines and the velocity variability of the pore-scale flow field. Bear [22] proposes to use expression Eq. (5.18) also for the transverse dispersion coefficient $D_T$. Experimental data suggest that $D_T/D \sim \text{Pe}^{0.95}$ for $1 < \text{Pe} < \text{Pe}_c$ [28].
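The crossover encoded in Eq. (5.18) can be made concrete with a short numerical sketch; the values for $\alpha$, $\alpha_I$ and $\delta$ below are made up for illustration, and the characteristic length is set to one so that $\overline{v} = \text{Pe}\, D$:

```python
# Made-up illustrative values: D, alpha (tortuosity factor),
# alpha_I (longitudinal dispersivity), delta (pore shape parameter).
D, alpha, alpha_I, delta = 1.0, 0.7, 1.0, 1.0

def D_L(Pe):
    """Longitudinal dispersion coefficient, Eq. (5.18), with L = 1."""
    v_bar = Pe * D
    return D * alpha + alpha_I * v_bar * Pe / (Pe + 2 + 4 * delta ** 2)

# Pe << 1: molecular diffusion in the tortuous pore space, D_L / D -> alpha
assert abs(D_L(1e-4) / D - alpha) < 1e-3
# Pe >> 1: mechanical dispersion dominates, D_L / D -> alpha_I * Pe
assert abs(D_L(1e4) / (alpha_I * 1e4) - 1.0) < 1e-2
```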

### **Macro-Dispersion**

For transport upscaling from the Darcy to the regional scale, stochastic perturbation theory gives for macro-scale transport the advection–dispersion equation [20]

$$
\phi \frac{\partial c(\mathbf{x}, t)}{\partial t} + \langle q \rangle \frac{\partial c(\mathbf{x}, t)}{\partial x} - \nabla \cdot \left[ \mathbf{D}^\* \nabla c(\mathbf{x}, t) \right] = 0,\tag{5.19}
$$

where the *x*-axis of the coordinate system is aligned with the mean hydraulic gradient. The Péclet number here is defined as $\text{Pe} = \langle q \rangle \ell/D_L$, with $\ell$ the correlation length of the log-permeability. For a statistically isotropic heterogeneous porous medium, the macro-dispersion tensor $\mathbf{D}^*$ is diagonal. For $\text{Pe} \gg 1$, the longitudinal macro-dispersion coefficient $D_L^*$ is given by

$$D\_L^\* = \sigma\_{ff}^2 \ell K\_G \langle \nabla P(\mathbf{x}) \rangle + \dots,\tag{5.20}$$

where the dots denote contributions of the order of $D_L$ and of higher order in $\sigma_{ff}^2$. This remarkable result relates the macroscopic dispersion effect due to local scale velocity fluctuations to the statistical medium properties in terms of the variance $\sigma_{ff}^2$ and correlation length $\ell$ of the log-permeability and the geometric mean permeability $K_G$. Perturbation theory in $\sigma_{ff}^2$ predicts that $D_T^*$ is of the order of $D_T$, that is, of the order of local scale dispersion. While this is exact in two dimensions [29], observations and numerical simulations suggest that it is not valid in three dimensions [30, 31]. In fact, numerical results suggest that $D_T^* \propto \sigma_{ff}^4$ in the advection-dominated limit $\text{Pe} \to \infty$. The reader is referred to the textbooks [11–13] for a thorough account of the macro-dispersion approach and stochastic perturbation theory for macro-scale transport in heterogeneous porous media.

# *5.3.2 Anomalous Dispersion*

Fickian dispersion predicts that the first and second centred moments of a solute plume increase linearly with time, that the solute distribution is Gaussian shaped, and that solute breakthrough can be described by inverse Gaussian distributions. Furthermore, within the Fickian dispersion paradigm, mixing is fully characterised by the constant dispersion coefficients. For heterogeneous porous media, and heterogeneous media in general, however, transport does not generally follow Fickian dynamics. Breakthrough curves are characterised by strong tailing, dispersion evolves in general non-linearly in time, and spatial plumes do not show Gaussian shapes but are in general characterised by forward or backward tails. Such behaviours are closely related to the notion of incomplete mixing on the support scale. If the support scale is not fully mixed, for example, due to mass transfer between sub-scale mobile and immobile regions, or due to velocity variability, transport dynamics are history-dependent. Non-Fickian and anomalous transport behaviours have been observed both on the pore [32–34] and Darcy scales [35, 36].

The mechanisms that mix the support scale are diffusion (pore scale) and hydrodynamic dispersion (Darcy scale). Note that ultimately the mechanism that attenuates concentration contrasts is diffusion. Mechanical dispersion quantifies the spread of a solute distribution due to advective heterogeneity and the formation of filaments, which facilitates the action of diffusion to homogenise concentration, and is discussed further in Sect. 5.3.3. Thus, for high Péclet numbers, the characteristic mixing time over the support scale may be significantly larger than the time scale of interest. The prediction of transport in heterogeneous media therefore requires approaches that can quantify non-Fickian transport dynamics.

The moment equation and projector formalism approaches [37, 38] are obtained from the stochastic averaging of the local scale heterogeneous transport problem, Eq. (5.12), which yields space- and time-non-local integro-differential equations whose memory kernels are related to the heterogeneity statistics. Closed-form expressions for the memory kernels are in general difficult to obtain. Fractional advection–dispersion equations [39, 40] are characterised by spatio-temporal kernel functions with an asymptotic power-law scaling. This approach can be related to continuous time random walks and Lévy flights [41]. In the following, we provide a summary of the continuous time random walk (CTRW) and multi-rate mass transfer (MRMT) frameworks to describe anomalous dispersion in porous media. The CTRW [42–44] and related time domain random walk (TDRW) [45, 46] frameworks, as well as the MRMT approach [47, 48], have been used for transport upscaling in highly heterogeneous porous and fractured media. These approaches account for the heterogeneity-induced distribution of advective and diffusive mass transfer rates and residence times.

### **Continuous Time Random Walks**

The continuous time random walk (CTRW) [49, 50] models particle motion as a random walk in space and time. The concentration distribution, or equivalently, the particle density *c(x, t)* is given by

$$c(\mathbf{x}, t) = \int_0^t dt' R(\mathbf{x}, t') \int_{t - t'}^{\infty} dt'' \psi(t''), \qquad \psi(t) = \int_{-\infty}^{\infty} d\mathbf{x}\, \psi(\mathbf{x}, t), \tag{5.21}$$

where $R(\mathbf{x}, t)\,d\mathbf{x}\,dt$ is the average number of times a particle is in $[\mathbf{x}, \mathbf{x}+d\mathbf{x}] \times [t, t+dt]$; $\psi(\mathbf{x}, t)$ is the joint PDF of transition length and time. Thus, the right-hand side of Eq. (5.21) is the frequency with which a particle arrives at position $\mathbf{x}$ at a time $t' \leq t$, times the probability that it stays (waits) there at least for the remaining time $t - t'$. $R(\mathbf{x}, t)$ satisfies the Chapman–Kolmogorov type equation

$$R(\mathbf{x},t) = c(\mathbf{x},t)\delta(t) + \int\_{-\infty}^{\infty} d\mathbf{x}' \int\_{0}^{t} dt' \psi(\mathbf{x} - \mathbf{x}', t - t')R(\mathbf{x}', t'). \tag{5.22}$$

The first term on the right-hand side denotes the initial particle distribution at time *t* = 0. Combining Eqs. (5.21) and (5.22) gives the generalised master equation [51]

$$\frac{\partial c(\mathbf{x},t)}{\partial t} = \int_{-\infty}^{\infty} d\mathbf{x}' \int_{0}^{t} dt' \mathcal{K}(\mathbf{x} - \mathbf{x}', t - t') \left[ c(\mathbf{x}', t') - c(\mathbf{x}, t') \right],\tag{5.23}$$

where the memory kernel $\mathcal{K}(\mathbf{x}, t)$ is defined by its Laplace transform [52] as

$$\mathcal{K}^\*(\mathbf{x}, \lambda) = \frac{\lambda \psi^\*(\mathbf{x}, \lambda)}{1 - \psi^\*(\lambda)}. \tag{5.24}$$

Laplace transformed quantities are denoted by an asterisk; the Laplace variable is denoted by $\lambda$. For short-ranged spatial transitions, Eq. (5.23) can be localised in space such that


$$\frac{\partial c(x,t)}{\partial t} + \int_0^t dt' \left[\kappa_1(t-t')\frac{\partial}{\partial x} - \kappa_2(t-t')\frac{\partial^2}{\partial x^2}\right] c(x,t') = 0,\tag{5.25}$$

where the advection and dispersion kernels are defined by

$$\kappa_1(t) = \int_{-\infty}^{\infty} dx\, x\, \mathcal{K}(x, t), \qquad \kappa_2(t) = \frac{1}{2} \int_{-\infty}^{\infty} dx\, x^2\, \mathcal{K}(x, t). \tag{5.26}$$
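As a consistency check on Eq. (5.24), an exponential waiting time density yields a constant kernel, $\mathcal{K}^*(\lambda) = 1/\tau$, so the generalised master equation loses its memory and becomes Markovian; a minimal numerical sketch (the value of $\tau$ is arbitrary):

```python
tau = 2.0   # mean waiting time (illustrative)

def psi_lap(lam):
    """Laplace transform of psi(t) = exp(-t / tau) / tau."""
    return 1.0 / (1.0 + lam * tau)

def kernel_lap(lam):
    """Memory kernel of Eq. (5.24) with the spatial part factored out."""
    return lam * psi_lap(lam) / (1.0 - psi_lap(lam))

# constant kernel 1 / tau at all lambda: no memory, Markovian dynamics
for lam in (1e-3, 1e-1, 1.0, 10.0):
    assert abs(kernel_lap(lam) - 1.0 / tau) < 1e-9
```

Heavy-tailed $\psi(t)$, in contrast, produce a $\lambda$-dependent kernel, which is the Laplace-space expression of the memory effects discussed below.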

The fluctuating micro-scale transport dynamics are encoded in the joint PDF $\psi(\mathbf{x}, t)$. For purely advective solute transport, for example, the transition length is of the order of the correlation scale $\ell_c$ of the velocity magnitude $v$, and the transition time is given kinematically by $\ell_c/v$. The distribution $p_s(v)$ of the particle speed sampled equidistantly along a streamline is related to the Eulerian velocity PDF $p_e(v)$ by flux weighting as [53]

$$p_s(v) = \frac{v\, p_e(v)}{\langle v_e \rangle}. \tag{5.27}$$

Thus, the joint PDF of transition length and time is

$$
\psi(x,t) = \delta(x - \ell_c) \frac{\ell_c^2\, p_e(\ell_c/t)}{t^3 \langle v_e \rangle}. \tag{5.28}
$$
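The flux weighting relation, Eq. (5.27), implies $\langle v\rangle_s = \langle v^2\rangle_e/\langle v_e\rangle$ for the mean speed sampled along streamlines; a Monte Carlo sketch with an assumed uniform Eulerian speed PDF (an illustration, not a model of any particular medium):

```python
import random
import statistics

# Assumed uniform Eulerian speed PDF on (0, 1): <v>_e = 1/2, <v^2>_e = 1/3.
random.seed(7)
v_e = [random.random() for _ in range(100_000)]

# flux-weighted (streamline-sampled) mean speed: weights proportional to v
mean_s = sum(v * v for v in v_e) / sum(v_e)

assert abs(mean_s - 2.0 / 3.0) < 0.01   # <v>_s = <v^2>_e / <v>_e = 2/3
assert mean_s > statistics.fmean(v_e)   # flux weighting favours fast regions
```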

For transport at an average velocity $v_0$ over the characteristic length $\ell_0$, combined with mass transfer into immobile zones, the transition time distribution is given in Laplace space by Margolin et al. [54] as

$$\psi^*(\lambda) = \exp\left(-\lambda\tau_0 - \gamma_t\tau_0 \left[1 - p_f^*(\lambda)\right]\right),\tag{5.29}$$

where $\tau_0 = \ell_0/v_0$ is the advective transition time, $\gamma_t$ the trapping rate and $p_f(t)$ the distribution of residence times in the immobile regions.

We briefly summarise the transport characteristics for an uncoupled CTRW, i.e., $\psi(x, t) = \Lambda(x)\psi(t)$ with $\Lambda(x)$ the marginal PDF of transition lengths, characterised by a power-law long-time scaling of the transition time distribution, $\psi(t) \sim t^{-1-\beta}$ with $0 < \beta < 2$. Such heavy-tailed transition distributions imply strong particle retention and thus memory effects. The power-law in $\psi(t)$ relates directly to the solute breakthrough curves. Note that the breakthrough time at a control plane is the sum of $n$ transition times $\tau_i$, where $n$ may be approximated by the average number of spatial steps needed to arrive at the control plane. Thus, the generalised central limit theorem implies that the breakthrough curve scales as $f(t, x) \sim t^{-1-\beta}$. The first and second centred moments of the solute distribution scale asymptotically as $m(t) \sim t^{\beta}$ and $\kappa(t) \sim t^{2\beta}$ for $0 < \beta < 1$, and as $m(t) \sim t$ and $\kappa(t) \sim t^{3-\beta}$ for $1 < \beta < 2$.
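These anomalous scalings can be probed with a few lines of code; the sketch below (illustrative parameters, unit step lengths, Pareto waiting times) estimates the growth exponent of the mean position for $\beta = 1/2$:

```python
import math
import random

random.seed(3)
beta, n_particles = 0.5, 5000

def n_steps_by(t_obs):
    """Completed transitions of one particle up to time t_obs."""
    t, steps = 0.0, 0
    while True:
        # inverse-CDF sample of psi(t) = beta * t**(-1 - beta), t >= 1
        t += (1.0 - random.random()) ** (-1.0 / beta)
        if t > t_obs:
            return steps
        steps += 1

def mean_position(t_obs):
    # unit step length: the position equals the number of completed steps
    return sum(n_steps_by(t_obs) for _ in range(n_particles)) / n_particles

m1, m2 = mean_position(1e3), mean_position(1e5)
exponent = math.log(m2 / m1) / math.log(1e5 / 1e3)
assert 0.4 < exponent < 0.6   # m(t) ~ t**0.5 rather than ~ t
```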

For advective transport upscaling, the CTRW framework has been used together with Markov models for the series of velocity magnitudes along streamlines [55–57], which allow for the evolution of the transition time distribution with increasing step number and for the conditioning of transport on the initial particle velocities [53]. The CTRW framework has been employed for transport modelling in a wide range of fluctuating environments, ranging from the diffusion of charge carriers in impure semiconductors [50] to diffusion in living cells [58], see also [41, 59].

### **Multi-Rate Mass Transfer**

The multi-rate mass transfer (MRMT) approach [47, 48] separates the support scale into a mobile continuum and a suite of immobile continua, which communicate through linear mass transfer. At each point, the immobile concentration $c_{im}(\mathbf{x}, t)$ is related to the mobile concentration $c_m(\mathbf{x}, t)$ through the linear relation [48]

$$c_{im}(\mathbf{x},t) = \int_0^t dt' \varphi(t-t')c_m(\mathbf{x},t'). \tag{5.30}$$

The evolution of the mobile concentration is given by the advection–dispersion equation [47]

$$\phi_m \frac{\partial c_m(x, t)}{\partial t} + q \frac{\partial}{\partial x} c_m(x, t) - D\phi_m \frac{\partial^2}{\partial x^2} c_m(x, t) = -\phi_{im} \frac{\partial c_{im}(x, t)}{\partial t}, \qquad (5.31)$$

where $\phi_{im}$ and $\phi_m$ are the immobile and mobile volume fractions. The memory function $\varphi(t)$ encodes the mass transfer mechanisms between the mobile and immobile continua. For diffusive mass transfer into slab-shaped immobile regions, the memory function is defined by its Laplace transform as [48, 60]

$$
\varphi^\*(\lambda) = \frac{\tanh(\sqrt{\lambda \tau\_D})}{\sqrt{\lambda \tau\_D}},
\tag{5.32}
$$

where *τD* is the characteristic diffusion time across the slab. For spherical inclusions, the memory function is

$$\varphi^\*(\lambda) = \frac{3}{\sqrt{\lambda \tau\_D}} \left[ \coth(\sqrt{\lambda \tau\_D}) - \frac{1}{\sqrt{\lambda \tau\_D}} \right]. \tag{5.33}$$

For purely diffusive mass transfer the MRMT approach is equivalent to transport under matrix diffusion [61], which describes transport in fractured media under diffusive mass transfer between the fracture and the rock matrix. In general for diffusive mass transfer, the memory function is obtained from the solution of a diffusion problem in a heterogeneous immobile domain [62].
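The limits of the slab memory function, Eq. (5.32), can be checked directly in Laplace space; the sketch below (with $\tau_D = 1$ as an arbitrary choice) verifies the normalisation $\varphi^*(0) = 1$ and the $(\lambda\tau_D)^{-1/2}$ large-$\lambda$ scaling, which is the Laplace image of the early-time $t^{-1/2}$ matrix diffusion regime:

```python
import math

tau_D = 1.0   # characteristic diffusion time across the slab (illustrative)

def phi_lap(lam):
    """Laplace transform of the slab memory function, Eq. (5.32)."""
    s = math.sqrt(lam * tau_D)
    return math.tanh(s) / s

# lambda -> 0: phi* -> 1, i.e. the memory function is normalised
assert abs(phi_lap(1e-8) - 1.0) < 1e-6
# lambda -> infinity: phi* ~ (lambda * tau_D)**(-1/2)
assert abs(phi_lap(1e6) * math.sqrt(1e6 * tau_D) - 1.0) < 1e-6
```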

For first-order mass transfer at a single rate $\omega$, the memory function is given by

$$
\varphi(t) = \omega \exp(-\omega t). \tag{5.34}
$$

Oftentimes, the mass transfer processes and the geometries of the immobile regions are not known in detail. The memory function has then been modelled by a superposition of multiple first-order memory functions as [35, 47, 48]

$$\varphi(t) = \int\_0^\infty d\omega \mathcal{P}(\omega) \omega \exp(-\omega t),\tag{5.35}$$

where P*(ω)* is the rate distribution, which may be related to the volume fractions of the immobile zones, for example. Other approaches use parametric forms for the memory function, such as truncated power-laws [63].
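As an illustration of Eq. (5.35), a broad power-law rate distribution produces a power-law memory function at intermediate times; the specific choice $\mathcal{P}(\omega) \propto \omega^{\beta-2}$ and the cut-offs below are assumptions for the sketch, not from the text:

```python
import math

# Assumed rate distribution P(w) = C * w**(beta - 2) on [w_min, w_max];
# for 1/w_max << t << 1/w_min this gives phi(t) ~ t**(-beta).
beta, w_min, w_max = 0.5, 1e-6, 1e2
C = (beta - 1.0) / (w_max ** (beta - 1.0) - w_min ** (beta - 1.0))

def phi(t, n=4000):
    """Log-spaced midpoint quadrature of the superposition, Eq. (5.35)."""
    lw_min, lw_max = math.log(w_min), math.log(w_max)
    h = (lw_max - lw_min) / n
    total = 0.0
    for i in range(n):
        w = math.exp(lw_min + (i + 0.5) * h)
        # integrand P(w) * w * exp(-w t), with dw = w * d(ln w)
        total += C * w ** (beta - 2.0) * w * math.exp(-w * t) * w * h
    return total

# intermediate-time log-log slope between t = 10 and t = 1000
slope = math.log(phi(1e3) / phi(1e1)) / math.log(1e2)
assert abs(slope + beta) < 0.05   # phi(t) ~ t**(-1/2)
```

By Eq. (5.36), such a memory function translates into a breakthrough tail $f(t, x) \sim t^{-1-\beta}$.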

In this framework, the behaviour of solute breakthrough at asymptotic times follows the time derivative of the memory function

$$f(t, x) = -\frac{x}{q} \frac{\phi_m}{\phi_{im}} \frac{d\varphi(t)}{dt}. \tag{5.36}$$

This means that for a memory function which asymptotically behaves as a power-law, $\varphi(t) \sim t^{-\beta}$, the breakthrough curve scales as $f(t, x) \sim t^{-1-\beta}$ [35, 64]. For matrix diffusion into a semi-infinite slab, the memory function scales as $\varphi(t) \sim t^{-1/2}$ and consequently the breakthrough curve as $f(t, x) \sim t^{-3/2}$, which is a signature of matrix diffusion.

Both the CTRW and MRMT frameworks share a similar phenomenology in that they account for memory effects due to a distribution of characteristic mass transfer time scales. The correspondence between the two pictures was discussed in [64–67]. The time behaviour of the spatial moments of the concentration distribution is similar to the one described by an uncoupled CTRW [64].

# *5.3.3 Mixing and Chemical Reactions*

In this section, we are concerned with mixing and reactions in heterogeneous porous media. As chemical reactions are contact processes, mixing and dispersion are key processes for the sound quantification of chemical reactions in heterogeneous media. This refers both to homogeneous, i.e., fluid–fluid, reactions, and to heterogeneous, i.e., fluid–solid, reactions, as outlined in the following. We will first discuss the notions of mixing and dispersion, and specifically the difference between these two processes. Then, we discuss chemical reactions under spatial heterogeneity.

### **Mixing, Diffusion and Dispersion**

In Fickian transport descriptions, the process that leads to the mixing of initially segregated solutes, or the mixing of an initially concentrated solute into the ambient fluid, is mass transfer due to diffusion or dispersion. From expression Eq. (5.16) we obtain directly that the maximum concentration $c_m(t)$ in one dimension decays as $c_m(t) = 1/\sqrt{4\pi D t}$. In $d$ spatial dimensions one finds that $c_m(t) = 1/(4\pi D t)^{d/2}$. Mixing due to molecular diffusion on mesoscopic length scales $L$ is in general slow. The characteristic mixing time is given by $\tau_m = L^2/D$. For a free fluid, stirring or chaotic flow accelerates the mixing process in that it generates lamellar structures [68] whose length $l(t)$ increases exponentially fast with time, $l(t) = l_0 \exp(\lambda t)$, where $\lambda$ here is the Lyapunov exponent. The width of the lamellar structures is limited by stretching and diffusion to the Batchelor scale $s_B = \sqrt{D/\lambda}$ [69]. The number of lamellae in a closed area of size $A = L^2$ increases exponentially fast as $n(t) \sim l(t)/L$, while each lamella occupies an area $\sim s_B L$. Complete mixing is achieved when $n(t)\, s_B L \sim L^2$. This gives a mixing time $\tau_m = \lambda^{-1} \ln(L^2/l_0 s_B)$, which is in general much shorter than the mixing time by diffusion alone.
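A back-of-the-envelope comparison of the two mixing times, with assumed order-of-magnitude parameter values (a solute in water, centimetre-scale domain, slow stirring):

```python
import math

# All numbers are assumed, order-of-magnitude values for illustration.
D = 1e-9      # molecular diffusivity of a solute in water, m^2/s
L = 1e-2      # domain size, m
lam = 1e-2    # Lyapunov exponent of the stirring, 1/s
l0 = 1e-3     # initial blob size, m

tau_diff = L ** 2 / D                              # pure diffusion: 1e5 s
s_B = math.sqrt(D / lam)                           # Batchelor scale ~ 3e-4 m
tau_chaos = math.log(L ** 2 / (l0 * s_B)) / lam    # ~ a few hundred seconds

# exponential stretching shortens the mixing time by orders of magnitude
assert tau_chaos < tau_diff / 100
```

The logarithm in $\tau_m = \lambda^{-1}\ln(L^2/l_0 s_B)$ is what makes chaotic mixing so much faster: the scale separation enters only logarithmically rather than quadratically.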

### **Mixing and Spreading in Porous Media**

Here we are concerned with mixing in flows through heterogeneous porous media. We have seen above that solute transport has been quantified in terms of hydrodynamic dispersion (pore to Darcy scale) and macro-dispersion (Darcy to regional scale), which assumes that the support scale is well mixed. This Fickian paradigm, however, breaks down for transport in heterogeneous media, for which anomalous or non-Fickian transport behaviours are observed. These involve history-dependence, which implies that the support scale cannot be considered well mixed. Unlike for stirring in a free fluid, for porous media flows the "stirring" is done by the medium itself, whose structure leads to tortuous path-lines and velocity heterogeneity. Chaotic flow patterns are in general topologically prohibited for steady two-dimensional flows. In three dimensions, steady pore-scale flow can be chaotic [70], which may lead to mixing similar to that of chaotic flow in a free fluid [71]. The existence of low velocity regions, stagnant zones in the wake of solid grains, velocity variability between pores and intra-grain mass transfer, however, leads to incomplete mixing on the REV scale and thus to history-dependent transport [72, 73].

The topological properties of Darcy-scale flow through porous media prohibit chaotic flow [74], see also Sect. 5.2.3. Thus, while the action of the spatially variable flow velocity leads to the creation of lamellar structures [75], their lengths cannot increase exponentially fast [76]. In fact, the extension of a solute distribution due to such advection-induced spreading can be measured by the concept of macro-dispersion. When lamellae form along the mean flow direction, the separation distance between them is given by the characteristic transverse heterogeneity length scale $\ell$. Thus, the time scale to mix the heterogeneous concentration distribution is given by the time for transverse dispersion over the distance $\ell$, $\tau_D = \ell^2/D_T$. These mechanisms are accounted for by effective dispersion coefficients [77, 78]. Unlike macro-dispersion, the concept of effective dispersion does not account for the effect of purely advective spreading [79, 80], but quantifies the combined effect of local scale dispersion and advective heterogeneity, which eventually leads to mixing.

### **Scalar Dissipation and Concentration Statistics**

In order to illustrate the relative roles of local scale dispersion and of spreading as quantified by macro-dispersion in solute mixing, we consider the evolution of the variance of the concentration fluctuation about its ensemble mean value, $c'(\mathbf{x},t) = c(\mathbf{x},t) - \langle c(\mathbf{x},t)\rangle$, defined here by

$$
\sigma\_c^2(t) = \int d\mathbf{x} \langle c'(\mathbf{x}, t)^2 \rangle. \tag{5.37}
$$

From the Darcy-scale advection–dispersion Eq. (5.12) one can derive [81]

$$\begin{split} \frac{d\sigma_c^2(t)}{dt} &= -\frac{2}{\phi} \int d\mathbf{x} \langle \nabla c'(\mathbf{x}, t) \cdot \mathbf{D} \nabla c'(\mathbf{x}, t) \rangle + \\ &\quad + \frac{2}{\phi} \int d\mathbf{x} \nabla \langle c(\mathbf{x}, t) \rangle \cdot \mathbf{D}^* \nabla \langle c(\mathbf{x}, t) \rangle. \end{split} \tag{5.38}$$

The first term on the right-hand side is denoted as scalar dissipation rate. It quantifies the destruction of concentration variance due to local dispersive mass transfer. The second term on the right-hand side quantifies the creation of concentration variance due to spreading as quantified by macro-dispersion.

The mixing process can also be described in terms of the evolution of the PDF of concentration values *c(***x***,t)* in the heterogeneous mixture, which can be defined by

$$p(c; \mathbf{x}, t) = \langle \delta[c - c(\mathbf{x}, t)] \rangle. \tag{5.39}$$

Based on the advection–dispersion Eq. (5.12) one can derive the following evolution equation for the PDF [82]:

$$\begin{split} \phi \frac{\partial p(c; \mathbf{x}, t)}{\partial t} + \langle \mathbf{q} \rangle \cdot \nabla p(c; \mathbf{x}, t) - \nabla \cdot \mathbf{D}^\* \nabla p(c; \mathbf{x}, t) &= \\ &= -\frac{\partial}{\partial c} \langle \nabla \cdot \mathbf{D} \nabla c(\mathbf{x}, t) | c \rangle p(c; \mathbf{x}, t) .\end{split} \tag{5.40}$$

The term on the right-hand side is the average over the local dispersive flux terms conditional to concentration, which represents a closure problem. The celebrated interaction by exchange with the mean (IEM) closure [83] approximates this expression as

$$
\langle \nabla \cdot \mathbf{D} \nabla c(\mathbf{x}, t) | c \rangle = -\frac{\gamma_{IEM}}{2} \left( c - \langle c(\mathbf{x}, t) \rangle \right), \tag{5.41}
$$

where $\gamma_{IEM}$ is a rate constant that may be related to local scale dispersion and the local dissipation scales. This closure implies for the scalar dissipation rate

$$\frac{2}{\phi} \int d\mathbf{x} \langle \nabla c'(\mathbf{x}, t) \cdot \mathbf{D} \nabla c'(\mathbf{x}, t) \rangle = \gamma\_{IEM} \sigma\_c^2(t). \tag{5.42}$$

This closure approximation has been applied to predict mixing in heterogeneous porous media, but is not able to match the numerically observed evolution of the scalar dissipation rate [84]. The IEM closure has several shortcomings for porous media mixing. Firstly, it implicitly assumes that the concentration PDF is Gaussian or approximately Gaussian shaped, while in porous media it is typically highly non-Gaussian [75]. Secondly, it assumes a constant local mixing scale, while the mixing scale in porous media evolves with time [85]. Alternative approaches employ parametric forms for the concentration PDF, such as beta distributions, which can be parameterised by the concentration mean and variance [86], mapping approaches [87] and stochastic mixing models for the evolution of concentration [88].

### **Lamellar Mixing**

Recently, the problem of mixing in porous media has been addressed using a lamellar mixing approach [75, 89]. The mixing process can be roughly separated into two regimes. In an early time regime, the initial solute distribution spreads out and advective heterogeneity generates a lamellar organisation of the concentration field. In the late time regime, the lamellar organisation is destroyed due to the coalescence of adjacent lamellae. In both regimes, the concentration PDF can be constructed in terms of the concentration contents of individual lamellae and their interactions.

In the early time regime, lamellae are non-interacting and the evolution of the concentration content of the mixture can be understood by the superposition of the concentration contents of isolated lamellae, which is fully determined by fluid stretching and local scale dispersion [69, 90]. The concentration across a stretched lamella is Gaussian shaped and given by [75]

$$c(z,t) = \frac{c_0 \exp\left(-\frac{z^2 \ell(t)^2}{s_0^2[1+4\theta(t)]}\right)}{\sqrt{1+4\theta(t)}}, \qquad \theta(t) = \frac{D}{\phi s_0^2} \int_0^t dt' \ell(t')^2,\tag{5.43}$$

where $z$ is the coordinate across the lamella, $\ell(t) = l(t)/l(0)$ the relative strip elongation and $s_0$ the initial strip width. Note that the concentration here depends on the elongation $\ell(t)$. The PDF of concentration values across a strip is then given by

$$p(c|c\_m) = \frac{1}{2c\sqrt{\ln(c\_m/\epsilon)\ln(c\_m/c)}},\tag{5.44}$$

where $\epsilon$ is a lower concentration cut-off and $c_m(t) = c(z = 0, t)$ is the maximum strip concentration, which again depends on the elongation $\ell(t)$. For heterogeneous media, the strip elongation $\ell(t)$ is the result of the random deformations a strip experiences as it is transported through the medium. For heterogeneous porous media, elongation is dominated by intermittent shear events along a trajectory [76]. The mean elongation may follow power-law behaviours $\ell(t) \sim t^{\alpha}$ with $1/2 < \alpha < 2$. The maximum concentration can be approximated by $c_m(t) \approx 1/\ell(t)\sqrt{Dt}$ [75], because it decays at the same rate as the area of the lamella increases. Thus, the PDF of elongations can be mapped onto the PDF $p_m(c_m,t)$ of maximum concentrations, and the global concentration PDF is obtained through superposition of the local lamellar PDFs as

$$p(c,t) = \int dc\_m p\_m(c\_m, t) \, p(c|c\_m). \tag{5.45}$$

The decisive step here is to recognise that the concentration field at early times is organised in a lamellar structure and that the concentration content of a lamella depends explicitly on the strip elongation. This allows obtaining the concentration PDF by mapping from the PDF of strip elongations.
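As a consistency check (not from the text), the conditional PDF of Eq. (5.44) is normalised on $[\epsilon, c_m]$, which the substitution $u = \ln(c_m/c)$ makes explicit: the integral becomes $\int_0^{L_u} du/(2\sqrt{L_u u}) = 1$ with $L_u = \ln(c_m/\epsilon)$. A numerical sketch with arbitrary cut-off values:

```python
import math

eps, c_m = 1e-6, 1.0              # illustrative cut-off and maximum
Lu = math.log(c_m / eps)          # ~ 13.8

# midpoint quadrature of Eq. (5.44) in the variable u = ln(c_m / c),
# where the integrand is 1 / (2 sqrt(Lu * u)); the u = 0 endpoint
# singularity is integrable
n = 500_000
h = Lu / n
norm = sum(h / (2.0 * math.sqrt(Lu * (i + 0.5) * h)) for i in range(n))

assert abs(norm - 1.0) < 2e-3     # p(c | c_m) is a proper PDF on [eps, c_m]
```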

With increasing time, the length of the lamellae becomes larger than the mixing support, which grows more slowly than the lamella elongation ($\sim \sqrt{t}$ for dispersive growth) or is constant in the case of a confined domain. Thus, the lamellae need to fold back onto each other, which in the late time regime leads to diffusive overlap and the formation of lamella aggregates through a random aggregation process. The PDF of the maximum concentrations of lamella aggregates after $n$ aggregations is given by the gamma distribution [69]

$$p\_m(c\_m, t) = \frac{\left(c\_m/\langle c\_m(t)\rangle\right)^{n-1} \exp\left(-c\_m/\langle c\_m(t)\rangle\right)}{\Gamma(n)\langle c\_m(t)\rangle}.\tag{5.46}$$

In the centre of the plume, the number $n(t)$ of lamellae in an aggregate is related to the average maximum concentration as $n(t)\langle c_m(t)\rangle \sim 1/\sqrt{\kappa^*(t)}$, where $\kappa^*(t)$ is the spatial variance of the average solute concentration, which can be described by macro-dispersion. This sets the concentration PDF at the plume centre as a result of the random aggregation of lamellae. The evolution equation for the PDF $p(c, t)$ of the concentration content of the mixture as a result of random aggregation is discussed in [69].

### **Chemical Reactions**

In this section, we briefly discuss the upscaling of chemical reactions in heterogeneous porous media and the influence of mixing on the reaction efficiency. Chemical reactions are contact processes and thus depend on the availability of the reacting species and on the mechanisms that bring them into contact. In a well-mixed reactor, in which stirring-induced mixing is exponential as described above, mass transfer is not a limiting process. For a heterogeneous porous medium, in which mixing is much slower, reactions may be limited by the transport rate, i.e., by the efficiency with which the reactants are brought into contact. In the former case, the chemical reaction is rate limited, in the latter transport limited. These situations are distinguished by the Damköhler number

$$\mathbf{D}\mathbf{a} = k\_r \mathbf{\tau}\_m,\tag{5.47}$$

where *kr* is a reaction rate and *τm* a characteristic mass transfer time scale. For Da *<* 1, the chemical reaction is rate limited; for Da ≫ 1 it is mixing, or transport, limited.
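In code, the classification of Eq. (5.47) is a one-liner; the numbers below are illustrative placeholders, with Da = 1 as the crossover named in the text:

```python
def damkoehler(k_r, tau_m):
    """Da = k_r * tau_m, Eq. (5.47)."""
    return k_r * tau_m

def limiting_process(da):
    # Da < 1: the reaction itself is the bottleneck; Da >> 1: mixing/transport is.
    return "rate limited" if da < 1 else "mixing/transport limited"

print(limiting_process(damkoehler(1e-3, 10.0)))   # Da = 0.01
print(limiting_process(damkoehler(1e-3, 1e5)))    # Da = 100.0
```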

### **Incomplete Mixing**

A key issue for reaction upscaling in porous media is the notion of a well-mixed support scale. We have seen in the previous sections that mixing in porous media is slow because the "stirring" by the porous medium is much less efficient than stirring-induced chaotic advection in a free fluid. Spatial heterogeneity and consequently slow mass transfer between different compartments of a heterogeneous porous medium leads to reactant segregation and thus to a reduction of the reaction rate compared to a well-mixed system. We have seen in Sect. 5.3.2 that incomplete mixing on the support scale leads to non-Fickian transport and history-dependent transport phenomena. The same mechanisms lead to reaction behaviours that are different from the ones measured in well-mixed laboratory environments [91, 92]. However, traditional Darcy-scale reactive transport modelling is based on the advection–dispersion equation for the species concentration *ci* combined with a kinetic rate law determined for a well-mixed environment,

$$\phi \frac{\partial c\_i(\mathbf{x}, t)}{\partial t} + \mathbf{q} \cdot \nabla c\_i(\mathbf{x}, t) - \nabla \cdot \left[\mathbf{D} \nabla c\_i(\mathbf{x}, t)\right] = \sum\_{j} \nu\_{ij} r\_j [\mathbf{c}(\mathbf{x}, t)],\tag{5.48}$$

where *νij* are the stoichiometric coefficients, *rj* is the reaction rate of the *j*th reaction and **c***(***x***, t)* is the vector of concentrations of the reacting species. This formulation assumes that the support scale is a well-mixed environment. This means that the time scale on which the concentration on the support scale homogenises after a concentration perturbation due to mass transfer is smaller than the reaction time scale. Only under these conditions can the reaction rate on the right-hand side be identified with the one obtained in a well-mixed environment. The upscaling of reactive transport from the pore to the Darcy scale can be formally studied, for example, using volume-averaging [93, 94] or homogenisation theory [95]. The validity of Darcy-scale reaction–dispersion models such as Eq. (5.48) has been investigated in detail by Kechagia et al. [96] and Meile and Tuncay [97]. These studies systematically show discrepancies between the average reaction rates and the reaction rates predicted by the advection–dispersion reaction equation. Similar observations have been made for the upscaling from the Darcy to the regional scale [98–101]. Incomplete mixing on the support scale leads in general to a reduction of the mixing efficiency compared to equivalent Fickian large-scale models. The segregation of reactants on the support scale can be addressed using multi-continuum approaches [102, 103], which resolve concentration variability on the support scale due to subregions of slow and fast mass transfer.
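As a minimal sketch of Eq. (5.48), the following integrates a single-species version in one dimension with a linear decay rate *r(c)* = *kc*, using explicit upwind finite differences; all parameter values and the boundary treatment are illustrative choices, not taken from the cited studies:

```python
import numpy as np

# 1D single-species sketch of Eq. (5.48): phi dc/dt + q dc/dx - D d2c/dx2 = -k c
phi, q, D, k = 0.4, 1e-3, 1e-5, 1e-3   # porosity, Darcy flux, dispersion, rate
L, N = 1.0, 200
dx = L / N
dt = 0.2 * min(phi * dx / q, phi * dx**2 / (2 * D))   # respect CFL/diffusion limits

c = np.zeros(N)
c[0] = 1.0                              # fixed-concentration inlet
for _ in range(2000):
    adv = -q * np.diff(c, prepend=c[0]) / dx                     # first-order upwind
    disp = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2  # central diffusion
    disp[0] = disp[-1] = 0.0                                     # crude boundary treatment
    c[1:] = c[1:] + dt / phi * (adv + disp - k * c)[1:]          # keep the inlet pinned
print(c[0], c[-1])   # the profile decays monotonically downstream of the inlet
```

The explicit positive-coefficient scheme keeps the concentration bounded between the inlet and the initial values, which is a convenient sanity check for this kind of sketch.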

An illustrative example for the impact of chemical heterogeneity on reactivity is diffusion in a medium characterised by a spatial distribution of specific reactive surfaces *σr(***x***)* at which species *A* reacts to *C*. The concentration *cA* of *A* evolves according to the reaction–diffusion equation

$$\frac{\partial c\_A(\mathbf{x},t)}{\partial t} - D\nabla^2 c\_A(\mathbf{x},t) = -k\sigma\_r(\mathbf{x})c\_A(\mathbf{x},t). \tag{5.49}$$

We consider *σr(***x***)* = 0*,* 1 randomly distributed in space with a characteristic distance $\ell\_c$, and $\mathrm{Da} = k\ell\_c^2/D \gg 1$, i.e., fast chemical reactions. Thus one would assume that the effective reaction rate is given by the inverse of the mean diffusion time between reactive spots, $k\_e = 1/\tau\_D$, where $\tau\_D = \ell\_c^2/D$ is the average diffusion time between reaction spots. However, the total mass of *A* decays at long times as a stretched exponential [104],

$$c\_A \sim \exp\left[-\beta \left(t/\tau\_D\right)^{d/(d+2)}\right],\tag{5.50}$$

with *β* being a constant. This decay is slower than the exponential decay predicted by *τD*. This is due to the fact that the distance between reaction spots has a finite probability of being arbitrarily large. Thus, segregation due to spatial heterogeneity leads to a slower decay than predicted by mean field theory.
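The slower-than-exponential decay of Eq. (5.50) can be seen concretely in a sketch; *β*, *τD* and the dimension *d* are arbitrary illustrative values:

```python
import math

def mean_field(t, tau_d):
    """Exponential decay predicted by the mean diffusion time tau_D."""
    return math.exp(-t / tau_d)

def stretched(t, tau_d, beta, d):
    """Stretched-exponential decay of Eq. (5.50) in d spatial dimensions."""
    return math.exp(-beta * (t / tau_d) ** (d / (d + 2)))

# With d = 3 the stretching exponent is 3/5; for t >> tau_D the stretched
# exponential lies orders of magnitude above the mean-field prediction.
for t in (1.0, 10.0, 100.0):
    print(t, mean_field(t, 1.0), stretched(t, 1.0, 1.0, 3))
```

At *t* = *τD* (and *β* = 1) the two curves coincide; beyond that the gap grows without bound, which is exactly the segregation effect described in the text.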

### **Mixing-Limited Reactions**

For mixing-limited chemical reactions, i.e., for high Da, the "stirring" action of the heterogeneous porous medium enhances the mixing efficiency compared to purely diffusive mass transfer and may lead to the formation of localised mixing and reaction hotspots [105, 106]. At high Da, the reaction rate is directly proportional to the mixing rate and the reactive transport problem can be mapped onto a conservative transport problem plus a speciation relation [107]. We illustrate this briefly for a fast reversible bimolecular reaction $A + B \rightleftharpoons C\downarrow$ on the Darcy scale. Chemical equilibrium is described by the mass action law

$$c\_A c\_B = K,\tag{5.51}$$

where *cA* and *cB* are the concentrations of species *A* and *B* and *K* is the equilibrium constant. The reactive transport problem is described by Eq. (5.48) for each species (*i* = *A, B*), which for a heterogeneous medium reads as

158 M. Icardi et al.

$$\phi \frac{\partial c\_i(\mathbf{x}, t)}{\partial t} + \nabla \cdot \mathbf{q}(\mathbf{x}) c\_i(\mathbf{x}, t) - \nabla \cdot \mathbf{D}(\mathbf{x}) \nabla c\_i(\mathbf{x}, t) = -r(\mathbf{x}, t), \tag{5.52}$$

where we have assumed the same dispersivity for both species, and where *r(***x***,t)* denotes the "equilibrium" reaction rate. It is determined by observing that *ξ* = *cA* − *cB* is a conserved variable (sometimes called mixture fraction) and obeys the conservative advection–dispersion equation

$$
\phi \frac{\partial \xi(\mathbf{x}, t)}{\partial t} + \nabla \cdot \mathbf{q}(\mathbf{x}) \xi(\mathbf{x}, t) - \nabla \cdot \mathbf{D}(\mathbf{x}) \nabla \xi(\mathbf{x}, t) = 0. \tag{5.53}
$$

Secondly, both *cA* and *cB* depend only on *ξ* through the mass action law, Eq. (5.51),

$$c\_A(\xi) = \frac{\xi}{2} + \sqrt{\frac{\xi^2}{4} + K},\tag{5.54}$$

and analogously for *cB*. Using this relation in Eq. (5.52), together with Eq. (5.53), gives the following equation for the reaction rate:

$$r(\mathbf{x},t) = \frac{1}{\phi} \frac{d^2 c\_A(\xi)}{d\xi^2} \left[ \nabla \xi(\mathbf{x},t) \cdot \mathbf{D}(\mathbf{x}) \nabla \xi(\mathbf{x},t) \right].\tag{5.55}$$

The expression in the square brackets is identical to the scalar dissipation rate, see Eq. (5.38), which measures the rate of mixing of dissolved substances. This expression illustrates the direct dependence of the reaction rate at high Da on the mixing rate as quantified by the scalar dissipation rate. The impact of medium heterogeneity on the reaction rate has been investigated using the PDF of the conserved component *ξ(***x***, t)*, which can be mapped directly onto the PDF of the species concentrations via Eq. (5.54) [108, 109].
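The speciation step can be made concrete in a few lines: Eq. (5.54) and its curvature, which enters Eq. (5.55), follow directly from the mass action law. A minimal sketch, with an arbitrary illustrative equilibrium constant *K*:

```python
import math

def c_A(xi, K):
    """Speciation relation, Eq. (5.54)."""
    return xi / 2 + math.sqrt(xi**2 / 4 + K)

def c_B(xi, K):
    # The conserved mixture fraction is xi = c_A - c_B.
    return c_A(xi, K) - xi

def d2cA_dxi2(xi, K):
    # Analytic curvature of (5.54): K / (4 (xi^2/4 + K)^{3/2}).
    # It peaks at xi = 0, i.e., where A and B are in stoichiometric contact,
    # so the local reaction rate (5.55) concentrates there.
    return K / (4 * (xi**2 / 4 + K) ** 1.5)

K = 1e-4
for xi in (-1.0, 0.0, 0.5):
    print(c_A(xi, K) * c_B(xi, K))   # mass action law (5.51): equals K for every xi
```

The product *cA cB* equals *K* identically in *ξ*, which is the algebraic content of the mapping from the conservative problem back to the species concentrations.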

# **5.4 Multiphase and Surface Processes**

In the previous sections, we presented the challenges related to dispersion and to the upscaling of bulk reactions in the fluid. When the Fickian assumption is not valid, these can be conveniently approximated with multi-continuum models. This is not dissimilar from what is obtained in proper multiphase systems. For example, *conjugate heat transfer* problems, which would require the coupled solution of heat transfer in the fluid and solid regions, can be averaged to obtain two-phase formulations with appropriate transfer terms. However, these transfer terms are local (in time and space) and linear only under local equilibrium conditions in both phases. An alternative approach is to perform the upscaling in two steps. First, the phase which relaxes faster to equilibrium is approximated macroscopically with an equivalent heat transfer coefficient (possibly non-constant) at the solid boundary. This first upscaling reduces the dimensionality of the fast dynamics in a sub-domain, effectively representing it as a surface process, leaving us with the task of averaging transport in one phase only, with complex boundary conditions.

Among these surface processes we can identify the following three categories:

• *Dynamic conditions:* One of the most general surface processes is the one encountered in adsorption–desorption, which can be modelled with the following boundary condition:

$$
\mathbf{j} \cdot \mathbf{n} = (\mathbf{u} - D\_m \nabla) \ c \cdot \mathbf{n} = b \left( c, s \right),
$$

where **j** is the flux, *b* represents the adsorption/desorption/transfer processes and *s* = *s(***x***, t)* is the adsorbed concentration on the surface. This condition is complemented by the surface ordinary differential equation

$$\frac{\partial s}{\partial t} = -b \left( c, s \right).$$

• *Mixed conditions:* When the process described by *b* is fast enough, one can explicitly find *s* = *s(c)* such that *b(c, s)* = 0. The above condition then simplifies to a, possibly non-linear, mixed boundary condition of the type

$$
\mathbf{j} \cdot \mathbf{n} = (\mathbf{u} - D\_m \nabla)\, c \cdot \mathbf{n} = b\left(c, s(c)\right) = f(c).
$$

Linearising *f* we obtain a mixed (Robin) boundary condition

$$
\mathbf{j} \cdot \mathbf{n} = f\_0 + f'c.\tag{5.56}
$$

• *Simple conditions:* Assuming an infinitely slow adsorption/reaction process *f*, the condition above reduces to a fixed-flux (Neumann) condition

$$
\mathbf{j} \cdot \mathbf{n} = f\_0,
$$

while, in the opposite case of infinitely fast surface process, we can retrieve a simple fixed concentration (Dirichlet) condition.

Clearly, the validity of these increasingly simplifying assumptions has to be verified case by case, and we consider this as part of the upscaling process. In all three cases, however, standard upscaling techniques (such as volume-averaging and homogenisation) can be applied only when the surface process is slow compared to advection and diffusion. When this is not the case, more advanced techniques have to be used [110, 111]. This is similar to the case of diffusion-limited bulk reactions (see above), which generally results in upscaled dispersion and velocity coefficients significantly different from those of the non-reactive case.

# *5.4.1 Mass and Heat Transfer*

The transport and deposition of particles in porous media are fundamental multiscale phenomena present in a number of natural and engineered processes. Although we refer here to the transport of physical particles, the discussion below conceptually applies, with minor modifications, also to heat transfer mechanisms.

The classical theoretical framework typically used is *colloid filtration theory* but, more generally, advances in this field belong to so-called *soft matter* physics. Several complex physical mechanisms can in fact arise due to the complex particle–particle and particle–wall interactions. In this chapter, however, we focus on a simple advection–diffusion description, in the dilute limit, with negligible Stokes and Reynolds numbers. Most hydrodynamic interactions (sometimes called hydrodynamic retardation effects) between the particles and the surface of the solid grains, as well as the DLVO interactions, happen at a very small scale and are therefore taken into account only at the boundaries, through modified boundary conditions. This is known as the Smoluchowski–Levich approximation, which results in a constant molecular diffusion coefficient *Dm*, obtained, for example, via the Stokes–Einstein relation for the diffusion of spheres in liquids. For larger particles, instead, one should consider many other effects that possibly act also far from the wall, such as the modified suspension viscosity, lift forces arising in small-Reynolds-number flows, the Faxén correction due to the perturbed flow around the particles, and possibly also particle rotation, particle collisions, etc. Although some of this additional physics can easily be included in the upscaling, we limit ourselves to the effect of a surface process on the upscaling, namely on the equivalent macroscopic dispersion and reaction coefficients.

In the dimensional analysis of mass transfer phenomena, the most used dimensionless quantity is the Sherwood number, describing the ratio between convective mass transfer and diffusive transport, which is the analogue of the Nusselt number used in heat transfer. It is defined as:

$$\text{Sh} = \frac{hL}{D\_m} \,, \tag{5.57}$$

where *L* is a characteristic length (in porous media applications generally taken equal to the pore or grain size), and *Dm* is the molecular diffusion coefficient. The mass transfer coefficient *h* is defined as the molar flux through the surface per unit surface, normalised by the concentration driving force. This representation implicitly treats mass transfer as an equivalent diffusion process through the surface. While this is often true at the micro-scale, after upscaling this effective transfer coefficient, scaled by the diffusion, depends strongly on the other transport mechanisms, such as advection, gravity and all the other external forces.

In the earlier studies of filtration, the most common upscaling approach was to determine a single parameter describing the filter effectiveness: this is obtained from its features and the operating conditions under investigation, and is called the *collector efficiency ηD* [112]. This efficiency coefficient is then assumed to be the product of two separate effects: the so-called attachment efficiency *α*, describing the probability that a particle that has reached the solid grain is adsorbed, and the purely fluid-dynamical term *η*0, modelling the transport from the bulk of the fluid to the surface of the grains. The latter is often decomposed as a sum of different contributions due to, for example, Brownian diffusion, steric interception and inertial (and gravitational) effects. Some early works [113] analytically obtained, for Sh in idealised geometries, expressions such as

$$\mathrm{Sh} = A\_s^{\frac{1}{3}} \mathrm{Pe}^{\frac{1}{3}}, \quad \eta = 4 A\_s^{\frac{1}{3}} \mathrm{Pe}^{-\frac{2}{3}},$$

where *As* is a parameter depending on the porous medium porosity *φ*. Many other such relationships are available connecting the system features, in terms of geometry and fluid dynamic conditions, to an approximate particle deposition efficiency *ηD* [114]. There are a number of issues with these models: first of all, they are most often based on a single idealised geometrical model of the porous medium, thus failing to capture the pore-scale complexity and heterogeneity and their effect on particle filtration. Another conceptual hurdle in the application of these models is the difficult translation of the obtained efficiency parameter *ηD* into an effective macro-scale reaction term employable in a macroscopic transport equation [115], and the understanding of its dependence on the flow parameters and its inseparable connection with the effective dispersion and velocity.
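The scaling content of these correlations is easy to check numerically; in the sketch below *As* is set to an arbitrary illustrative value (in practice it depends on the porosity through the chosen flow model):

```python
def sherwood(a_s, pe):
    """Idealised-geometry correlation Sh = As^(1/3) Pe^(1/3) [113]."""
    return a_s ** (1 / 3) * pe ** (1 / 3)

def eta_diffusion(a_s, pe):
    """Brownian-diffusion transport efficiency eta = 4 As^(1/3) Pe^(-2/3)."""
    return 4 * a_s ** (1 / 3) * pe ** (-2 / 3)

a_s = 20.0   # illustrative placeholder; porosity-dependent in practice
# Increasing Pe tenfold grows Sh weakly (10^(1/3)) while the diffusive
# collection efficiency drops (10^(-2/3)): faster flow mixes better but
# leaves less time for particles to diffuse to the grain surface.
print(sherwood(a_s, 1e3) / sherwood(a_s, 1e2))
print(eta_diffusion(a_s, 1e3) / eta_diffusion(a_s, 1e2))
```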

### **From Surface Processes to Averaged Reaction Rates**

We follow here [115], showing how to obtain a stationary effective reaction rate for a periodic geometry and its dependence on the (reactive) boundary condition and flow parameters, for arbitrary reaction/deposition regimes. To this aim, we consider Eq. (5.12) and we assume that the detailed surface processes can be averaged in a small boundary layer around the solid matrix and approximated with a generic effective linearised mixed boundary condition (see Eq. (5.56)) of the type:

$$-D\_m \nabla c \cdot \mathbf{n} = -r \frac{\alpha}{\alpha - 1} c + r\_0 \quad \text{on } \Gamma,\tag{5.58}$$

where *α* is the deposition/attachment efficiency, Γ is the porous matrix surface, *r* is the surface transfer coefficient and *r*0 is a constant surface flux. When *α* = 1, Eq. (5.58) is equivalent to a Dirichlet condition *c* = 0 on the solid grains (perfect sink). In a particle-based Lagrangian framework, such as the one considered when performing random walk simulations, the efficiency *α* can be interpreted as the probability that a single colloidal particle attaches to the collector surface upon collision [116].

Applying a simple volume-average over a fixed REV $\Omega$, defined as $\overline{\,\cdot\,} = \frac{1}{V}\int\_{\Omega} \cdot \,\mathrm{d}v$ (with *V* being the total volume), the divergence theorem and the boundary condition, Eq. (5.58), we obtain


$$\frac{\partial \overline{c}}{\partial t} + \frac{1}{V} \int\_{\partial \Omega} (c \,\mathbf{u} \cdot \mathbf{n} - D\_m \nabla c \cdot \mathbf{n}) \, \mathrm{d}s = -\frac{1}{V} \int\_{\Gamma} r \frac{\alpha}{\alpha - 1} c \,\mathrm{d}s + r\_0. \tag{5.59}$$

Considering a box with periodic boundary conditions in the *y*- and *z*-directions, and no accumulation (stationary, local equilibrium hypothesis), it is possible to identify the surface integral on the LHS of Eq. (5.59) with the total flux *F* through the *x*-boundaries of the domain (inlet and outlet in this case). Since the starting equation is linear, it is reasonable to assume this quantity (the average mass flux) to be a linear function of the average concentration, with a macroscopic *effective reaction rate R* defined as:

$$R = \frac{F - r\_0}{\overline{c}V}.\tag{5.60}$$

This quantity can be computed directly from a micro-scale simulation on the periodic cell, by looking at the inlet–outlet fluxes and the volume-averaged concentration, even in the case of a perfect sink (*r* → ∞). Assuming Fickian macroscopic dispersion (see the sections above), we can therefore postulate a closed form for the one-dimensional macroscopic advection–diffusion–reaction equation for $C = \overline{c}/c\_\infty$, in dimensionless form

$$\frac{\partial C}{\partial t\_{\rm diff}} + \varepsilon \text{Pe} \frac{\partial C}{\partial X} - \frac{\mathbf{D}}{D\_m} \frac{\partial^2 C}{\partial X^2} = -\text{Da}\, C + \text{Da}\_0,\tag{5.61}$$

where *X* represents the (dimensionless) macroscopic space variable and we have defined $t\_{\rm diff} = tD\_m/L^2$, and the Damköhler numbers as $\mathrm{Da} = RL^2/D\_m$ and $\mathrm{Da}\_0 = r\_0 L/q$.

For a periodic FCC packing [115] we obtain the following qualitative upscaling law for the long-time effective macroscopic reaction rate, as a function of the microscopic Damköhler number $\mathrm{Da}\_m = rL^2/D$:

$$\mathrm{Da} = \begin{cases} K\_1(\mathrm{Da}\_m) & \text{for} \quad \mathrm{Pe} \lesssim 10 \\ K\_2(\mathrm{Da}\_m) \mathrm{Pe}^{0.15} & \text{for} \quad \mathrm{Pe} \gtrsim 10, \mathrm{Da}\_m \gtrsim 1 \\ K\_3(\mathrm{Da}\_m) & \text{for} \quad \mathrm{Da}\_m \lesssim 1 \end{cases}$$

with constants *Ki*. This qualitative behaviour is universal, although the exponent 0*.*15 and the constants may depend on the specific geometry. The dependence of Da on Da*m*, on the other hand, is linear (independent of Pe) for slow surface processes while, for infinitely fast processes, it saturates to a constant (that depends on Pe). This is the typical behaviour for reactions happening on a localised lower-dimensional manifold, where mixing can totally control the reaction.
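Schematically, the piecewise law can be encoded as follows; the prefactors *Ki*, which in the text are functions of Da*m*, are replaced here by illustrative constants, so the sketch only reproduces the qualitative trends (linearity in Da*m* for slow surface processes, saturation and the Pe^0.15 growth otherwise):

```python
def effective_da(pe, da_m, k1=1.0, k2=1.0, k3=1.0):
    """Qualitative macroscopic Damkoehler number Da(Pe, Da_m).

    k1, k2, k3 stand in for the geometry-dependent functions K_i(Da_m)."""
    if da_m < 1:             # slow surface process: linear in Da_m, Pe-independent
        return k3 * da_m
    if pe < 10:              # diffusion-dominated transport
        return k1
    return k2 * pe ** 0.15   # advection-enhanced regime

print(effective_da(100.0, 0.5))   # linear branch: proportional to Da_m
print(effective_da(100.0, 10.0))  # 100**0.15, roughly 2
print(effective_da(100.0, 1e6))   # same value: the rate saturates at high Da_m
```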

# **5.5 Conclusions**

In the previous sections we highlighted the most important physical models and macroscopic equations that can be relevant in porous and heterogeneous materials. We identified several assumptions and limitations of the upscaling processes.

# **Non-Equilibrium and Lack of Scale Separation**

When no upscaling is available that can decouple the physical scales, or when transient out-of-equilibrium effects are important at the micro-scale, alternative numerical multiscale approaches can be used, such as multiscale FEM, variational, heterogeneous or hybrid multiscale methods. These methods allow a generic system to be solved efficiently while explicitly accounting for some micro-scale information. They usually rely on a pre-processing offline step (similar to the cell-problems in classical upscaling) or on an online dual-resolution computational approach.

# **Suspensions and Interfacial Flows**

Multiphase flows such as suspensions and interfacial flows can be upscaled with the approaches described above only under local equilibrium and when the forces acting on each phase are relatively small. The presence of complex momentum transfer (in the case of suspensions) and of strong localised forces, such as surface tension (in the case of interfacial flows), makes standard upscaling inadequate. An intuitive interpretation of this inadequacy is that volumetric/ensemble averages cannot properly represent interfacial and configuration-dependent forces. Time-dependent, non-linear, non-local and memory effects can arise at the macro-scale.

# **Appendix A: Homogenisation and Two-Scale Expansions**

In this section we briefly sketch the main ideas and steps in the formal derivation of macroscopic equations using the method of periodic homogenisation with two-scale asymptotic expansion. The method was initially proposed in theoretical mechanics for the study of composite materials and subsequently extensively studied in mathematical analysis, for elliptic (diffusion) operators and variational problems [117–121]. Quite interestingly, the first results in the area were developed in parallel for periodic and random (stationary ergodic) media. Since the underlying mathematical techniques are significantly different, the latter is also called stochastic homogenisation; it has lately seen important developments [122–124], making it, in some cases, a more realistic and conceptually deeper alternative to the former. While the formal steps of periodic homogenisation are easily accessible with a basic background in asymptotic methods and PDEs, the mathematical proofs concerning the existence of a macroscopic limit (and the convergence towards it [125]) require extensive knowledge of functional analysis and PDE theory and a much more complex notation, clearly beyond the scope of this chapter. For porous media applications we refer to [1] for a comprehensive, yet accessible, treatise.

We can identify the following steps that are usually needed to perform the two-scale expansion and derive a homogenised equation:

1. Identify the scale-separation parameter $\epsilon = \ell/L \ll 1$, the ratio between the micro-scale and macro-scale lengths.
2. Decompose the space variable into a slow (macroscopic) variable **x**0 and a fast (microscopic) variable **x**1.
3. Expand the unknown field, and consequently its gradient, in powers of $\epsilon$:


$$\mathbf{x} = \mathbf{x}\_0 + \epsilon \mathbf{x}\_l \tag{5.62a}$$

$$c(\mathbf{x}) \approx c\_0(\mathbf{x}) + \epsilon c\_1(\mathbf{x}) + \epsilon^2 c\_2(\mathbf{x}) = c\_0(\mathbf{x}\_0, \mathbf{x}\_1) + \epsilon c\_1(\mathbf{x}\_0, \mathbf{x}\_1) + \epsilon^2 c\_2(\mathbf{x}\_0, \mathbf{x}\_1) \tag{5.62b}$$

$$\nabla c = \left(\nabla\_{\mathbf{x}\_0} + \frac{1}{\epsilon} \nabla\_{\mathbf{x}\_l}\right) c \approx \frac{1}{\epsilon} \nabla\_{\mathbf{x}\_l} c\_0 + \left(\nabla\_{\mathbf{x}\_0} c\_0 + \nabla\_{\mathbf{x}\_l} c\_1\right) + \epsilon \left(\nabla\_{\mathbf{x}\_0} c\_1 + \nabla\_{\mathbf{x}\_l} c\_2\right),\tag{5.62c}$$

where $\mathbf{x}\_0 \in \Omega \subseteq [0, L]^d$ defines the macro-scale and **x**1 the (periodic with period $[0, \ell]^d$) micro-scale. While this expansion is formally always valid, its usefulness, as we will further explain below, relies on a wide *separation of scales* in the domain, which is not always satisfied in real porous media.

4. Starting from the lowest power of $\epsilon$, hierarchically define and solve (when a solution is trivially found) the cascade of equations for *c*0*, c*1*,...* . Usually it is enough to solve for the first two terms<sup>2</sup> to get a closed form and computable parameters for the macroscopic limit (*c*0). However, to achieve that, additional assumptions have to be made to close the system of equations. In particular, a simplified dependence of *c*1 on the macro-scale *c*0 has to be assumed (when it is not obtained formally), guessing a separable structure of the type:

<sup>2</sup>With a total of three equations for the three lowest powers of $\epsilon$, in the case of second-order PDEs.

### 5 Upscaling Flow and Transport Processes 165

$$c\_1(\mathbf{x}\_0, \mathbf{x}\_1) = \mathbf{w} \cdot \nabla\_{\mathbf{x}\_0} c\_0 + \overline{c\_1},\tag{5.63}$$

where the vector field **w** = **w***(***x**1*)*, periodic and with zero average, $\overline{\mathbf{w}} = \mathbf{0}$, is also called the *corrector* and can be found by solving the so-called *cell-problem*, usually derived, after some manipulation, from the first correction equation for *c*1. This decomposition of *c*1 embodies the scale-separation hypothesis.

5. Depending on the problem, an average of *c*0 (or, when *c*0 is constant on the micro-scale, of *c*0 + *c*1) might also be defined to get rid of the micro-scale dependence, defining a new macroscopic field

$$\overline{c\_0}(\mathbf{x}\_0) = \overline{c\_0(\mathbf{x}\_0, \mathbf{x}\_1)} = \int\_{[0,\ell]^d} c\_0(\mathbf{x}\_0, \mathbf{x}\_1) d\mathbf{x}\_1$$

and a macroscopic evolution equation for *c*0 (or *c*0 + *c*1) can be derived from the terms of order $\epsilon^0$.

### **Example: Reaction–Diffusion in a Perforated Domain**

It is important to notice that the steps outlined above are a *constructive formal* approach for homogenisation that is attractive (at least as a first approach) for new problems but does not cover the large number of possible problems for which a homogenised limit exists. This is much less general than, for example, variational approaches in which a *given* macroscopic limit is guessed and proven to be the correct limit. As an example, we illustrate the steps above for the simple case of pure diffusion with slow superficial reaction (or equivalently, heat transfer with a prescribed heat flux at the boundary), i.e.,

$$\nabla \cdot \left( D \nabla c(\mathbf{x}) \right) = f(\mathbf{x}) \tag{5.64}$$

with constant diffusion<sup>3</sup> coefficient *D*, and a generic space-dependent source/sink term *f*. The equation is defined on a perforated (porous) domain $\Omega \subseteq [0, L]^d$ in *d* dimensions, with periodic microstructure (cell) $[0, \ell]^d$. For simplicity we also assume periodicity on the external boundary of $\Omega$, but this can easily be replaced with any other simple (macroscopic) boundary condition. On the internal boundaries (the porous matrix) we impose a linear mixed (normal flux) condition:

$$D\nabla\_{\mathbf{n}}c = kc + r,\tag{5.65}$$

<sup>3</sup>Traditionally, homogenisation is performed on continuous domains with space-dependent oscillating diffusion coefficient. However, the case studied here, more relevant to porous media applications, can be interpreted as a limiting case in which the diffusion coefficient tends to a patch-wise constant. This is what is done practically when solving porous media problems with immersed boundaries, penalisation or diffuse domain methods.

where *k* is a superficial reaction term (or heat transfer coefficient), and *r* is a constant superficial source (or sink) term. Despite its simplicity, this includes already important applications such as linear isotherm adsorption or heat transfer. The extension to the mass or heat transfer through an interface within two domain or two phases, instead of a boundary condition, is a possible extension which is discussed in Sect. 5.4.

Inserting the expansions of Eq. (5.62) into Eq. (5.64) yields

$$\nabla\_{\mathbf{x}\_{0}} \cdot \left[ D \left( \nabla\_{\mathbf{x}\_{0}} + \frac{1}{\epsilon} \nabla\_{\mathbf{x}\_{1}} \right) (c\_{0} + \epsilon c\_{1}) \right] + \frac{1}{\epsilon} \nabla\_{\mathbf{x}\_{1}} \cdot \left[ D \left( \nabla\_{\mathbf{x}\_{0}} + \frac{1}{\epsilon} \nabla\_{\mathbf{x}\_{1}} \right) (c\_{0} + \epsilon c\_{1}) \right] = f\_{0}(\mathbf{x}\_{0}, \mathbf{x}\_{1}) + \epsilon f\_{1}(\mathbf{x}\_{0}, \mathbf{x}\_{1}),$$

while the expansion of the boundary condition can be rewritten as

$$D\left(\mathbf{n}\cdot\nabla\_{\mathbf{x}\_0} + \frac{1}{\epsilon}\mathbf{n}\cdot\nabla\_{\mathbf{x}\_1}\right)(c\_0 + \epsilon c\_1) = kc\_0 + \epsilon k c\_1 + r.$$

At this point, since we have not put the equation in dimensionless form, it is important to identify the regime of interest. We will focus here on the simplest regime, for which the homogenisation approach outlined above works seamlessly. This is the case when all bulk coefficients (*D*, *f*) are of the same order, namely of order one, and the boundary coefficients are of order $\epsilon$, i.e., $k = \epsilon k\_1$ and $r = \epsilon r\_1$.

Collecting now terms with equal powers of $\epsilon$, and taking into account the assumptions on the coefficients, the only term of order $\epsilon^{-2}$ leads to the linear homogeneous equation

$$\nabla\_{\mathbf{x}\_{1}} \cdot \left( D \nabla\_{\mathbf{x}\_{1}} c\_0 \right) = 0$$

that turns out to be a simple (linear homogeneous) equation for the variable *c*0 at the micro-scale, i.e., in each periodic cell (since only derivatives with respect to **x**1 appear). The largest terms in the boundary condition give a simple no-flux (homogeneous Neumann) condition, **n** · ∇**x**1 *c*0 = 0 on the internal solid boundaries, with periodic boundary conditions on the external boundaries. This equation is trivially satisfied by a constant, i.e., a function *c*0 = *c*0*(***x**0*)* of **x**0 only.

For the terms of order $\epsilon^{-1}$, we obtain the following equation:

$$
\nabla\_{\mathbf{x}\_0} \cdot \left( D \nabla\_{\mathbf{x}\_1} c\_0 \right) + \nabla\_{\mathbf{x}\_1} \cdot \left( D \nabla\_{\mathbf{x}\_0} c\_0 \right) + \nabla\_{\mathbf{x}\_1} \cdot \left( D \nabla\_{\mathbf{x}\_1} c\_1 \right) = 0.
$$

Given the conclusion above, i.e., *c*0 constant at the micro-scale, the first term vanishes and, inserting the assumption of Eq. (5.63),<sup>4</sup> we obtain the equation

$$\nabla\_{\mathbf{x}\_1} \cdot \left( D \nabla\_{\mathbf{x}\_1} \left( \mathbf{w} \cdot \nabla\_{\mathbf{x}\_0} c\_0 \right) + D \nabla\_{\mathbf{x}\_0} c\_0 \right) = 0$$

or, since this has to hold for an arbitrary ∇**x**<sup>0</sup> *c*0, equivalently written as a vectorial *cell-problem* for the corrector **w**

$$\nabla\_{\mathbf{x}\_{1}} \cdot D\left(\nabla\_{\mathbf{x}\_{1}} \mathbf{w} + I\right) = \mathbf{0} \tag{5.66}$$

with *I* being the identity matrix, and with boundary condition

$$D\mathbf{n} \cdot \left(\nabla\_{\mathbf{x}\_1} \left(\mathbf{w} \cdot \nabla\_{\mathbf{x}\_0} c\_0\right) + \nabla\_{\mathbf{x}\_0} c\_0\right) = D\mathbf{n} \cdot \left(\nabla\_{\mathbf{x}\_1} \mathbf{w} + I\right) \cdot \nabla\_{\mathbf{x}\_0} c\_0 = 0$$

or, equivalently, in vectorial form,

$$D\mathbf{n} \cdot \left(\nabla\_{\mathbf{x}\_{1}}\mathbf{w} + I\right) = \mathbf{0}.\tag{5.67}$$

The next order, $\epsilon^0$, reads

$$\nabla\_{\mathbf{x}\_{0}} \cdot \left( D\left(\nabla\_{\mathbf{x}\_{0}} c\_{0} + \nabla\_{\mathbf{x}\_{1}} c\_{1}\right) \right) + \nabla\_{\mathbf{x}\_{1}} \cdot \left( D\left(\nabla\_{\mathbf{x}\_{0}} c\_{1} + \nabla\_{\mathbf{x}\_{1}} c\_{2}\right) \right) = f\_{0}(\mathbf{x}\_{0}, \mathbf{x}\_{1})$$

that, by using Eq. (5.63), the conclusions obtained above and the boundary condition

$$D\mathbf{n} \cdot \left(\nabla\_{\mathbf{x}\_0} c\_1 + \nabla\_{\mathbf{x}\_1} c\_2\right) = k\_1 c\_0 + r\_1$$

can be averaged over the periodic cell to obtain the *effective* equation

$$
\nabla\_{\mathbf{x}\_{0}} \cdot \left( \mathbf{D} \nabla\_{\mathbf{x}\_{0}} \overline{c\_{0}} \right) = \overline{f\_{0}(\mathbf{x}\_{0}, \mathbf{x}\_{1})} - \frac{\alpha}{\phi} \left( k\_{1} \overline{c\_{0}}^{\Gamma} + r\_{1} \right), \tag{5.68}
$$

where *α* is the specific surface area, *φ* is the porosity of the porous material, and **D** is the (tensorial, anisotropic) effective diffusion coefficient **D** = *D* (*I* + ∇**x**<sup>1</sup> **w**), averaged over the cell, computed from the cell-problem defined above. This gives us a macroscopic governing equation for *c*<sup>0</sup> + *c*<sup>1</sup> (which, in this case, is equivalent to *c*0, since the former is constant over **x**<sup>1</sup> and the latter has zero mean), where surface terms integrated over the interface (*c*<sup>0</sup> averaged over the interface is, in fact, a surface average which, in this case, again equals *c*0) appear on the right-hand side, together with *f*0, as bulk reaction terms.
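To make the role of the cell problem concrete, the following sketch solves its one-dimensional scalar analogue, d/d*x* [*D*(*x*)(*w*′ + 1)] = 0 with periodic boundary conditions, by finite volumes, and recovers the classical result that in 1D the effective diffusivity reduces to the harmonic mean of the microscale *D*. The discretisation, the choice of *D*(*x*) and all variable names are illustrative and not taken from the chapter.

```python
import numpy as np

# 1D periodic cell problem: d/dx [ D(x) (w' + 1) ] = 0 (scalar analogue of the
# vectorial cell problem), solved with a finite-volume scheme. Illustrative only.
N = 200
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
D = 1.0 + 0.8 * np.sin(2 * np.pi * x)        # periodic microscale diffusivity

# Face diffusivities (harmonic interpolation keeps discrete fluxes consistent)
Df = 2.0 / (1.0 / D + 1.0 / np.roll(D, -1))  # D on the face between cells i, i+1

# Assemble the periodic system A w = b from flux balance on each cell
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    ip = (i + 1) % N
    A[i, ip] += Df[i] / h
    A[i, i] -= Df[i] / h
    A[i, i] -= Df[i - 1] / h
    A[i, (i - 1) % N] += Df[i - 1] / h
    b[i] = -(Df[i] - Df[i - 1])              # contribution of the "+1" in the flux

# The periodic problem is singular (w defined up to a constant): pin w[0] = 0
A[0, :] = 0.0
A[0, 0] = 1.0
b[0] = 0.0
w = np.linalg.solve(A, b)

# Effective diffusivity: cell average of the flux D (w' + 1)
flux = Df * ((np.roll(w, -1) - w) / h + 1.0)
D_eff = flux.mean()
D_harm = 1.0 / np.mean(1.0 / D)              # exact 1D homogenisation result
```

In 1D the flux *D*(*w*′ + 1) is constant across the cell, which forces the effective coefficient to the harmonic mean; in higher dimensions the same construction yields a genuinely tensorial **D**.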

<sup>4</sup>Which, in this case, is a unique and exact decomposition since this equation is defined up to an additive constant, *c*1, and a multiplicative constant, ∇**x**<sup>0</sup> *c*0.

# **Physical Interpretation and Limitations**

With respect to other upscaling techniques, the steps performed above do not have a direct physical motivation and, therefore, it might be hard to understand and assess the validity of the underlying assumptions. However, a few considerations can be made:


<sup>5</sup>Also a source term on the right-hand side would appear to counterbalance advection or faster reaction and obtain a periodic solution.

only as a generic way of discovering emerging macroscopic equations. For more complex models, it is usually advisable to first tackle the problem with more phenomenological macroscopic models based, for example, on mixture theory and variational principles, using conservation laws and the second law of thermodynamics. Complementarily, volume-averaged computational results can be used to compute the parameters of these models, while homogenisation directly gives explicit formulas for the upscaled parameters. From the practical point of view, both approaches require the solution of a microscopic *cell/closure* problem.

# **Appendix B: Volume/Ensemble Averaging**

An upscaling approach, developed extensively for porous media in the last 30 years [2, 128–130], is the theory of volume-averaging with its many variants. This is conceptually equivalent to techniques more commonly used in fluid dynamics, turbulence, combustion and multiphase flows [82, 131, 132], such as large eddy simulation. Closer to the Reynolds averaging of the RANS equations is instead the concept of ensemble averaging, used predominantly in a surprisingly abundant body of theoretical work in stochastic hydrology [11, 12, 133, 134]. These approaches have been developed separately by the porous media and fluid dynamics communities without significant connections. While it is out of the scope of this chapter to explain any of these approaches in detail, we offer here a very brief introduction and some general comments about their applicability, similarities, differences and meaning. The interested reader is referred to the previously cited works and to the review [135].

Both spatial averaging and ensemble/perturbation methods rely on the following steps:

• Write the heterogeneous coefficients of the governing equation (e.g., velocity field, diffusion/dispersion coefficient) with a simple decomposition into a mean and a fluctuating term, which for a generic quantity *g* reads

$$g(\mathbf{x}) = \int\_{\Omega} w(\mathbf{y}) g(\mathbf{x} + \mathbf{y}) \, \mathrm{d}\mathbf{y} + \tilde{g}(\mathbf{x}) = \overline{g}(\mathbf{x}) + \tilde{g}(\mathbf{x})$$

for volume-averaging (better denoted, in this formulation, as generalised volume-averaging or *space convolution filter*), where *w(***y***)* is a function with a compact and localised support such that ∫ *w(***y***)* d**y** = 1. The size of the support, which defines the averaging length scale, should be related to the REV size (when the REV exists). Alternatively, for ensemble averaging, assuming that *g* is now a random field, we can similarly define

$$g(\mathbf{x}, \omega) = \int\_{\Omega} g(\mathbf{x}, \omega) \, \mathrm{d}\mu(\omega) + g'(\mathbf{x}, \omega) = \left\langle g \right\rangle (\mathbf{x}) + g'(\mathbf{x}, \omega),$$

where *μ* is the probability measure associated with the random field and *ω* is a random event. While the first average is well-defined both for pore-scale (i.e., perforated) and macro-scale (i.e., continuous) domains, the second one strictly applies only to macro-scale (e.g., permeability) heterogeneities, although extensions are possible.

• Applying the averaging operator to a generic transport equation for the field *c*, an unclosed equation for the mean (which we denote here generally with *c* for both averages) is found, where the unclosed terms are either non-linear terms (for both volume and ensemble averages) or boundary terms (for volume-averaging). In fact, while in standard single-phase turbulence modelling, the averaging volume only contains fluid, in porous media the so-called *spatial averaging theorem* applies

$$
\overline{\nabla c} = \nabla \overline{c} + \frac{\alpha}{\varepsilon} \overline{c \, \mathbf{n}}^{\Gamma},
$$

where, as discussed in the homogenisation section, the overline with superscript *Γ* denotes an average over the fluid–solid interface, and **n** is the normal to the surface. A generalisation is possible when dealing with generic multiphase systems whose interfaces between phases can be mobile. In that case, a more complex treatment has been used to derive upscaled equations for all phases [2, 131, 136].


$$c = c\_0 + c\_1 + c\_2 + \dots,$$

where, for example, *c*<sup>0</sup> is obtained by neglecting all unclosed terms in the equation for *c*.
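The filter decomposition in the first step above can be illustrated numerically: the sketch below applies a space convolution filter (a top-hat kernel with unit integral) to a synthetic one-dimensional periodic field and splits it into mean and fluctuating parts. The kernel shape, domain size and test field are illustrative choices, not taken from the text.

```python
import numpy as np

# Mean/fluctuation split g = g_bar + g_tilde via a space convolution filter.
L, N = 10.0, 1000
x = np.linspace(0.0, L, N, endpoint=False)
g = np.sin(2 * np.pi * x / L) + 0.3 * np.sin(40 * np.pi * x / L)  # slow + fast

# Compact, localised kernel with unit "integral" (discrete weights sum to 1),
# stored with y = 0 at index 0 so circular convolution implements the filter.
ell = 0.5                              # averaging length (REV-like scale)
y = np.minimum(x, L - x)               # periodic distance from the origin
w = (y <= ell / 2).astype(float)
w /= w.sum()

# g_bar(x) = sum_y w(y) g(x - y): periodic convolution via the FFT
# (the kernel is symmetric, so convolution and correlation coincide)
g_bar = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(w)))
g_tilde = g - g_bar                    # fluctuation: g = g_bar + g_tilde
```

With the averaging length matched to the fast-mode wavelength, the filter removes the fluctuation almost entirely while leaving the slow component nearly untouched, which is exactly the scale separation the decomposition relies on.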

<sup>6</sup>Here is where the hypotheses on the porous medium structure are introduced, through estimates of the (tensorial) spatial moments **n** **y**<sup>*j*</sup>, with *j* = 0, 1, 2, ... denoting the order of the tensorial product and **y** being the spatial coordinate.


# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 6 Recent Developments in Particle Tracking Diagnostics for Turbulence Research**

**Nathanaël Machicoane, Peter D. Huck, Alicia Clark, Alberto Aliseda, Romain Volk, and Mickaël Bourgoin**

# **6.1 Introduction**

Flow velocity measurements based on the analysis of the motion of particles imaged with digital cameras have become the most commonly used measurement technique in contemporary fluid mechanics research [1, 2]. *Particle image velocimetry* (PIV) and *particle tracking velocimetry* (PTV) are two widely used methods that enable the characterisation of a flow based on the motion of particles, from Eulerian (PIV) or Lagrangian (PTV) points of view. Several aspects influence the accuracy and reliability of the measurements obtained with these techniques [2]: resolution (temporal and spatial), dynamic range, the capacity to measure 2D or 3D components of velocity in a 2D or 3D fluid domain, statistical convergence, etc. These imaging and analysis considerations depend on the hardware (camera resolution, repetition rate, on-board memory, optical system, etc.) but also on the software (optical calibration relating real-world coordinates to pixel coordinates, particle identification and tracking algorithms, image correlation, dynamical post-processing, etc.) used in the measurements. In this context, particle tracking velocimetry can provide highly resolved measurements, both spatially and temporally, of the flow velocity (if the particles are flow tracers) or of particle velocities (if the particles immersed in the flow have their own dynamics) in experimental

P. D. Huck

University of Washington, Department of Mechanical Engineering, Seattle, WA, USA

Laboratoire de Physique, ENS de Lyon, CNRS and Université de Lyon, Lyon, France

N. Machicoane · A. Clark · A. Aliseda

University of Washington, Department of Mechanical Engineering, Seattle, WA, USA

R. Volk · M. Bourgoin (-)

Laboratoire de Physique, ENS de Lyon, CNRS and Université de Lyon, Lyon, France e-mail: mickael.bourgoin@ens-lyon.fr

<sup>©</sup> The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9\_6

fluid mechanics research and applications [2–5]. A frequent implementation of this method in the laboratory is based on taking a pair of images (with double exposure cameras, typical of PIV) in rapid succession followed by a larger time interval before the next pair of images. A second common implementation of this method starts with the capture of a long sequence of images, all equally separated by a small time interval (with high-speed cameras). In the first case, the particle tracking velocimetry technique provides a single vector per particle in a pair of consecutive images, with subsequent velocity measurements in other image pairs being uncorrelated. The high-speed image sequence, on the contrary, provides the opportunity to track the same particle over multiple (*n*) images and provides several (*n*-1) correlated velocity (or *n*-2 acceleration) measurements, at different locations but along the same particle trajectory.

There are three recent contributions implemented by the authors and summarised in this chapter that apply equally to both versions of the particle tracking velocimetry technique: each one advances important aspects in one of the stages of the measurement of velocity from particle images. The first contribution (Sect. 6.2) provides an optical-model-free calibration technique for multi-camera particle tracking velocimetry and potentially also for particle image velocimetry. This method is simpler to apply and provides equal or better results than the pinhole camera model originally proposed by Tsai in 1987 [6]. In the context of particle tracking with applications in fluid mechanics, particle centre detection and tracking algorithms have been the focus of more studies [7, 8] than optical calibration and 3D position determination. Although many strategies with various degrees of complexity have been developed for camera calibration [9–13], most existing experimental implementations of multi-camera particle tracking use the Tsai pinhole camera model as the basis for calibration. The method proposed here instead uses plane-by-plane transformations to define an interpolant that connects each point on the camera sensor to the actual light beam across the measurement volume. As it does not rely on any a priori model, the method easily handles potential complexity and non-linearity in an optical setup while remaining computationally efficient in stereo-matching 3D data. By contrast, the Tsai approach, sketched in Fig. 6.1, is based on the development of a physical model of the camera arrangement with several parameters (their number depending on the complexity). The model assumes that all rays of light received on the camera sensor pass through an optical centre (pinhole) for each camera. The quality of the inferred transformation will therefore be sensitive to variations of the setup, which may lead to calibration data that no longer match the model, due to optical distortions, for instance.
Moreover, the Tsai model requires non-linear elements to account for each aspect of the optical path. In practice, realistic experimental setups are either complex and time-consuming to model via individual optical elements in the Tsai method, or over-simplified by ignoring certain elements, such as windows or compound lenses, with a loss of accuracy.

The second contribution (Sect. 6.3) addresses the reconstruction of trajectories from the set of particle positions detected in the image sequence, an important aspect of particle tracking velocimetry [8, 14–17]. It describes the practical implementation of two recent developments: shadow particle velocimetry using parallel light

**Fig. 6.1** Sketch of the Tsai pinhole camera model and stereo-matching: the position of a particle in the real world corresponds to the intersection of the two lines emitted by the camera centres *O*<sup>1</sup> and *O*<sup>2</sup> and passing through the particle positions *P*<sup>1</sup> and *P*<sup>2</sup> detected on the respective camera planes

combined with pattern tracking [18, 19], and trajectory reconstruction based on an extension of the four-frame best estimate (4BE) method. While the former was originally developed to access the size, orientation, or shape of the tracked particles, the latter is an extension of previous tracking algorithms [17] (which themselves extended earlier ones) and can be easily implemented as an add-on to an existing tracking code.

Finally, Sect. 6.4 describes a method to estimate noiseless velocity and acceleration statistics from particle tracking velocimetry tracks. This is a crucial step because imaging techniques may introduce noise into the detection of particle centres, which is then amplified when computing successive temporal or spatial derivatives. The position signal is therefore usually time-filtered prior to differentiation [5, 20], a procedure that increases the signal-to-noise ratio at the cost of signal alteration. The method described here, inspired by work in this area [21, 22], is based on computing the statistics of the particles' displacements with increasing time lag, does not require any kind of filter, and allows for the estimation of noiseless statistical quantities both in the Lagrangian framework (velocity and acceleration time correlation functions) and in the Eulerian framework (statistics of spatial velocity increments) [23, 24].
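The idea of extracting noiseless statistics from displacement increments can be illustrated on a minimal synthetic example: for white position noise of variance *σ*², the mean squared displacement over a lag *τ* behaves as ⟨*v*²⟩*τ*² + 2*σ*² at small lags, so a fit against *τ*² separates the physical signal from the noise floor without any filtering. The trajectory below (constant velocity plus noise) and all parameters are illustrative stand-ins, not the authors' data or code.

```python
import numpy as np

# Synthetic noisy track: x = v*t + white noise of std sigma. The squared
# increments over a lag tau satisfy <(dx)^2> = v^2 tau^2 + 2 sigma^2, so a
# linear fit in tau^2 recovers both the noiseless velocity variance (slope)
# and the noise level (intercept). Illustrative parameters throughout.
rng = np.random.default_rng(0)
dt, n = 1e-3, 200_000
v = 2.0                                    # "true" velocity
sigma = 0.01                               # position-noise standard deviation
x = v * dt * np.arange(n) + sigma * rng.standard_normal(n)

lags = np.arange(1, 9)
msd = np.array([np.mean((x[k:] - x[:-k]) ** 2) for k in lags])

# Fit msd against tau^2: slope -> <v^2>, intercept -> 2 sigma^2
tau2 = (lags * dt) ** 2
slope, intercept = np.polyfit(tau2, msd, 1)
```

The slope estimates ⟨*v*²⟩ = 4 and the intercept 2*σ*² = 2 × 10⁻⁴ without ever differentiating the raw signal; the chapter's method generalises this idea to Lagrangian and Eulerian velocity and acceleration statistics.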

Note that this chapter does not intend to review all the possible extensions of particle tracking velocimetry and has been limited to some recent developments from the authors' groups, which we believe can be useful and easily implemented to improve the accuracy of already operational PTV systems in other groups, or which may help users developing new PTV experiments. Many other interesting advances have been developed over the past decade. We can, for instance, mention the use of inverse-problem digital holography [25–27], which allows tracking particles in 3D with one single camera; new algorithms for tracking particles in highly seeded flows, such as the Shake-The-Box method [28]; or the tracking of particles with rotational dynamics [29, 30], which makes it possible to investigate simultaneously the translation and rotation of large objects transported in a flow.

# **6.2 A Model-Free Calibration Method**

# *6.2.1 Principle*

3D particle imaging methods require an appropriate calibration method to perform the stereo-matching between the 2D positions of particles in the *pixel coordinate system* for each camera and their absolute 3D positions in the *real-world coordinate system*. The accuracy of the calibration method directly impacts the accuracy of the 3D positioning of the particles in real-world coordinates.

The calibration method proposed here (further discussed in [31]) is based on the simple idea that no matter how distorted a recorded image is, each bright point on the pixel array is associated with the ray of light that produced it. As such, the corresponding light source (typically a scattering particle) can lie anywhere on this ray of light. An appropriate calibration method should be able to directly attribute to a given doublet *(xp, yp)* of pixel coordinates its corresponding ray path. If the index of refraction in the measurement volume of interest is uniform (so that light propagates along a straight line inside the measurement volume), each doublet *(xp, yp)* can be associated with a straight line *d* (defined by 6 parameters in 3D: a position vector **O***(xp, yp)* and a displacement vector **V***(xp, yp)*), regardless of the path outside the volume of interest, which can be very complex as material interfaces and lenses are traversed. The calibration method described here builds a *pixel-to-line* interpolant I that implements this correspondence between pixel coordinates and each of the 6 parameters of the ray of light: I : *(xp, yp)* −→ *(***O***,* **V***)*. While this method may seem similar to the Tsai approach, which also designates a ray of light for each doublet *(xp, yp)*, there is a significant difference: the Tsai approach assumes a camera model and is sensitive to deviations of the actual setup from this idealised optical model. The proposed approach does not rely on any *a priori* model and is only based on empirical interpolations of the actual calibration data. Thus, the new method implicitly takes into account optical imperfections, media inhomogeneities (outside the measurement volume) or complex lens arrangements.
Additionally, the generalisation of the method to cases where light does not propagate in a straight line is straightforward: it is sufficient to build the interpolant with the parameters required to describe the expected curved path of light in the medium of interest (for instance, a parabola in the case of linear stratification).

# *6.2.2 Practical Implementation*

An implementation of the method proposed is used to build the interpolant I from experimental images of a calibration target with known patterns at known positions. The process described here concerns only one camera for clarity. In general, in a realistic multi-camera system, the protocol has to be repeated for each camera independently.


A calibration target, consisting of a grid of equally separated dots, is translated perpendicularly to its plane (along the *OZ* axis) using a micropositioning stage, and is imaged at several known *Z* positions by every camera simultaneously. In total, *NZ* images are taken by each camera: *Ij* is the calibration image when the plane is at position *Zj* (with *j* ∈ [1*, NZ*]). For an example highlighting the quality of the calibration method, *NZ* = 13 planes were collected across the measurement volume. The calibration protocol, sketched in Fig. 6.2, then proceeds as follows:

**Fig. 6.2** Sketch of the calibration method. (**a**) Image *Ij* of the calibration target (over *Nx* × *Ny* pixels) located at one position *Zj* (in real-world coordinates). From this image the centres of the images of the calibration dots in pixel coordinates, (*x*<sup>*k*</sup><sub>*j*</sub>, *y*<sup>*k*</sup><sub>*j*</sub>) with *k* ∈ [1; *Nj*], are determined. (**b**) Corresponding known locations of the centres of the calibration points in real-world coordinates, (*X*<sup>*k*</sup><sub>*j*</sub>, *Y*<sup>*k*</sup><sub>*j*</sub>) with *k* ∈ [1; *Nj*]. From these two sets of coordinates, the coefficients of the transformation T*<sup>j</sup>* connecting pixel and real-world coordinates of the target located at *Zj* are evaluated using least squares methods [32] (the procedure is repeated for the target positions *Zj*, *j* ∈ [1, *NZ*]). From a practical point of view, the transformations T*<sup>j</sup>* can be easily determined using ready-to-use algorithms, such as the fitgeotrans function in Matlab<sup>R</sup>. Note that, for simplicity of illustration, we show here a situation with no optical distortion and no perspective deformation, where the plane-by-plane transformation T*<sup>j</sup>* is just given by a magnification factor *Mj* between pixel and real-world coordinates. In an actual experiment, perspective effects would require at least a linear projective transformation, defined by a 2 × 2 matrix with at least 4 coefficients to be estimated for each plane position *Zj*. More realistic situations would require higher order polynomial transformations involving a larger number of coefficients [32] (a third-order polynomial transformation embeds, for instance, 10 coefficients per plane).
(**c**) Stacks of calibration planes at 3 different positions (*Zj*, *j* = 1, 2, 3) in 3D real-world coordinates (for simplicity, only 3 planes are illustrated, although in an actual calibration more planes may be used for better accuracy). The 3 coloured crosses illustrate the 3 projections (one on each plane, with the colour of each point matching the plane onto which it is projected), in real-world coordinates (*X, Y, Z*)*<sup>j</sup>*<sub>=1,2,3</sub>, of an arbitrary point *(x, y)* in pixel coordinates to which the 3 transformations T*<sup>j</sup>*<sub>=1,2,3</sub> have been applied. These projections are distributed along the path of light corresponding to the line in real-world coordinates that projects onto the point *(x, y)* in the camera pixel coordinates. Since in a homogeneous medium light propagates in straight lines, the path of light is simply determined by a linear fit (dashed line), in 3D real-world coordinates, of the three points (*X, Y, Z*)*<sup>j</sup>*<sub>=1,2,3</sub>. Using more calibration planes leads to more points for the linear fit and hence to a better accuracy. This procedure directly connects the pixel coordinate *(x, y)* to the corresponding ray of light that produces it. Note that the fit is only done within the calibration volume where the target is translated along the *NZ* planes and does not extend to the cameras
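The line-fitting step of the calibration reduces, for each pixel, to fitting a 3D line through the per-plane projections. A minimal sketch, with synthetic stand-ins for the transformed points (the plane positions, line parameters and function names below are illustrative, not from the chapter):

```python
import numpy as np

# Fit the ray of light for one pixel: the plane-by-plane transformations map
# the pixel to one real-world point per calibration plane, and the ray is the
# 3D line best fitting those points (least squares). Synthetic data only.

def fit_ray(points):
    """Least-squares 3D line through points: returns (origin O, direction V)."""
    O = points.mean(axis=0)
    # Principal direction of the centred point cloud via SVD
    _, _, vt = np.linalg.svd(points - O)
    V = vt[0]
    return O, V

# Synthetic example: 13 per-plane projections generated exactly on a known line
Z = np.arange(13, dtype=float)                 # calibration-plane positions
true_O = np.array([1.0, 2.0, 0.0])
true_V = np.array([0.3, -0.1, 1.0])
points = true_O + Z[:, None] * true_V          # (X, Y, Z)_j for one pixel

O, V = fit_ray(points)
# Distance of the original points from the fitted ray should vanish here
d = np.linalg.norm(np.cross(points - O, V / np.linalg.norm(V)), axis=1)
```

With noisy per-plane transformations the fit no longer passes exactly through the points, and using more planes simply makes the least-squares line more robust, as noted in the caption.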


Stereo-matching, or finding the 3D position of a point (or particle), is performed by finding a set of rays, one from each camera, that cross (or almost cross) in the vicinity of the same spot in the volume of interest. The most probable 3D location of the corresponding particle is then taken as the 3D position that minimises the total distance to all those rays. The interpolant described in the method is created using every pixel of the cameras, as this step is done only once, but the method can be applied to a subset of the pixel array. For a setup with moderate optical distortion, a loose interpolating grid with a few hundred points (typically 20 × 20) is largely sufficient. As a matter of fact, using the interpolant is not mandatory, as all the calibration information is embedded in the plane-by-plane transformations. Third-order polynomial plane-by-plane transformations embed 10 parameters each (5 polynomial coefficients for each of the *X* and *Y* transformations). If, for instance, 7 calibration planes are used, the calibration information embeds about 70 parameters in total. Using the interpolant approach is above all a practical solution; the interpolation information embeds a massive number of hidden parameters (6 per interpolation point) and is therefore expected to be highly redundant. It is therefore generally unnecessary to build the interpolant on a too refined grid (in any case, the added computational cost is minimal, as the interpolant is only built once per calibration procedure and can be stored in a small file for later use). A refined grid may nevertheless be needed for systems with important small-scale and heterogeneous optical distortions, in which case higher order plane-by-plane transformations (hence embedding more parameters) would also be necessary.
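The stereo-matching step just described (finding the point minimising the total squared distance to the rays) is a small linear least-squares problem. A sketch with synthetic rays, not the authors' implementation; the function name and test geometry are illustrative:

```python
import numpy as np

# Stereo-matching: given one ray (O_i, V_i) per camera, the particle position
# is the 3D point minimising the summed squared distance to all rays. Each ray
# contributes a projector onto the plane orthogonal to its direction, and the
# minimiser solves a 3x3 linear system. Synthetic rays below.

def match_point(origins, directions):
    """Point minimising the total squared distance to a set of 3D lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for O, V in zip(origins, directions):
        v = V / np.linalg.norm(V)
        P = np.eye(3) - np.outer(v, v)      # projector orthogonal to the ray
        A += P
        b += P @ O
    return np.linalg.solve(A, b)

# Two synthetic rays built to pass exactly through a known target point
target = np.array([0.5, -0.2, 1.0])
dirs = [np.array([1.0, 0.2, 0.1]), np.array([-0.3, 1.0, 0.2])]
orgs = [target - 2.0 * d for d in dirs]

p = match_point(orgs, dirs)
```

For rays that only almost cross, the same system returns the least-squares "closest point", and the residual distance to the rays can serve as a rejection criterion for wrong matches.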

# *6.2.3 Results: Comparison with Tsai Model*

The calibration procedure proposed by Tsai [6] has been widely used to recover the optical characteristics of an imaging system in order to reconstruct the 3D position of an object. The accuracy of the proposed imaging calibration procedure is assessed by comparing it with a simple implementation of the Tsai model, using a camera model that accounts only for radial distortion. While additional optical elements in the Tsai model could increase the accuracy, they come at an increased operator workload.

Our stereoscopic optical arrangement (see Refs. [31, 33] for more details), typical of PTV in a 1 cm thick laser sheet, focuses on the geometrical centre of a water flow inside an icosahedron, with both camera objectives mounted in a Scheimpflug configuration. A plate with 2 mm dots, mounted parallel to the laser sheet and attached to a micrometric traverse (with 10 μm accuracy), is used as a target. Both calibration methods use 13 target images, 1 mm apart from each other along the *Z* axis.

The calibration method uses the 2D positions of the target dots, and provides a series of positions that cannot exactly match the 3D real coordinates because, in both methods, the model parameters are obtained by solving an over-constrained linear system in the least-squares sense. The calibration error, i.e., the absolute difference between the (known) real coordinates and the transformed ones, is computed to evaluate the calibration accuracy. This error can be estimated along each direction or as a norm, *d* = (*d*<sup>2</sup><sub>*X*</sub> + *d*<sup>2</sup><sub>*Y*</sub> + *d*<sup>2</sup><sub>*Z*</sub>)<sup>1/2</sup> (Table 6.1). Figure 6.3 plots the total 3D error averaged over the 13 planes used, for both the proposed method and the Tsai model.

The accuracy of the proposed calibration is superior to that of the Tsai method (in its simplest implementation). The error is at least a factor of 3 smaller (depending on which component is considered) and is reduced to barely 0.5 pixel. It is important to note that the error map obtained with the Tsai method (Fig. 6.3b) seems to display a large bias along *Y* that could be due to the use of Scheimpflug mounts, which are typically not included in this Tsai calibration, and to the angle between the cameras and the tank windows. This hypothesis was verified by comparing the two calibration procedures in more conventional conditions, where they give similar results with a very small error.

For the present optical arrangement and the new calibration method, the error in the *Y* positioning is the smallest. Indeed, due to the shape of the experiment (an icosahedron), the *y* axis of each camera sensor is almost aligned with the *Y* direction, so that this coordinate is fully redundant between the cameras, while the *x* axes of the camera sensors form an angle *α* ≈ *π*/3 with the *X* direction, so that the precision on the *X* positioning is lower. This directly impacts the precision of the *Z* positioning, whose error is almost equal to the *X* positioning error.


**Table 6.1** Spatial average of the absolute deviation from the expected position of the targets

**Fig. 6.3** Calibration error averaged along *Z* using the proposed calibration method (**a**) or Tsai model (**b**)

# *6.2.4 Discussion*

Up to 13 planes were used to build the operator that yields the camera calibration. While two planes are the minimum required for the method, a larger number of imaged planes provides better accuracy. In this case study, the major sources of optical distortion were the Scheimpflug mounts, the imperfect lenses, and the non-perpendicular interfaces. 7 planes provided an optimal trade-off between high accuracy and simplicity, with an error only 2% larger than with the 13-plane setup, while using only 3 planes yields a 10% larger error. The fact that few planes are sufficient to obtain a good calibration accuracy is likely related to the fact that third-order polynomial plane-by-plane transformations are sufficient to handle most of the distortions, including those originating from the optics, from the tilt-and-shift system and from the refraction at the air–water interface, so that the projection of a pixel position to real space is accurately aligned along a line which defines the corresponding line of sight. Few points are then needed to accurately fit the line parameters (using more points essentially ensures a more robust fit with respect to small errors in the plane-by-plane transformations). When dealing with a more complex experiment, e.g., with a refractive index gradient, increasing the number of planes in the calibration would improve the results, allowing to accurately capture the curvature of the light rays.

The proposed calibration method has several advantages that make it worth implementing in a multi-camera particle imaging setup. First, it requires no model or assumption about the properties of the optical path followed by the light in the different media outside the volume of interest; it only requires light to propagate in a straight line within it. The method simply computes the equation for the propagation of light in space. This ray-line equation is fully determined by the physical location of the calibration dots located at known positions in space. Note that the present calibration method is versatile enough that the linear-propagation constraint can be easily relaxed. This can be useful, for instance, to calibrate stratified flows with spatial variations of the optical index. It is then sufficient to replace the linear fit used to determine the line of sight (from the projected pixel coordinates to the planes) by an appropriate curved path of light (a polynomial fit may often be a good enough approximation). Second, this method is turnkey for any typical optical system. The implementation of the new method is easily done and can be applied retroactively to previous calibration images.

Let us briefly discuss the improved accuracy of the calibration compared to the model of Tsai. The reason for the improved accuracy lies mainly in the higher number of (hidden) parameters actually defining both calibration methods. As pointed out earlier, in the newly proposed calibration all the calibration parameters are embedded in the plane-by-plane transformations, with 10 parameters for each third-order polynomial transformation. Using 13 calibration planes ends up with 130 hidden calibration parameters. These reduce to 70 when using 7 planes. In any case, this is much larger than the number of parameters embedded in the Tsai model, which has typically 6 external parameters (defining the position and the orientation of the equivalent pinhole camera) and several internal parameters (focal length, pixel aspect ratio, optical distortion parameters, etc.), typically of order 10. It is therefore not surprising that the present method gives better accuracy. Note also that the present comparison may be unfair to the Tsai model, as we have not considered more sophisticated pinhole camera models, properly accounting, for instance, for tilt-and-shift corrections, which would naturally embed a larger number of parameters and an increased accuracy. Such extensions of the pinhole approach are based on sophisticated physical and geometrical models, with algorithms that tend to be tedious to implement. A big advantage of the present calibration is its versatility and ease of algorithmic implementation, which remains identical whatever the complexity of the optical path. Finally, note that while the proposed method has a larger number of parameters, they come from empirical determination only and are obtained automatically through the calibration process; there is no need to prescribe *a priori* a set of parameters tied to a specific model requiring choices from the user. This makes the method not only more accurate but also adaptable and objective.

To conclude, the model-free calibration method proposed can be easily implemented with both the calibration image acquisition and spatial detection of target points currently standard in the field. The calibration algorithm and the operator calculation to convert pixel locations to physical locations, with minimal errors, can easily be programmed in any language available to experimentalists (the reader can contact the authors for source codes to implement the calibration algorithms). The new method is at least equally, and frequently more, accurate than the commonly used Tsai model, and it can be used more easily and in a wider range of optical configurations. As experimental setups become more complicated with more optical and light refraction elements, this method should prove simpler to implement and more accurate than the model-based Tsai one.

# **6.3 Particle Tracking Algorithms**

Section 6.3.1 describes the implementation of particle tracking velocimetry in a von Kármán flow using parallel light beams and two cameras forming an angle of 90°. As described below, the originality of this implementation of PTV lies in the combination of parallel illumination and of pattern tracking (rather than particle tracking), which makes the calibration and the matching particularly simple and accurate. It is well suited to the tracking of small objects in a large volume using only two standard LEDs as light sources. In this setup, tracking is performed independently on the two views using a nearest neighbour algorithm prior to stereo-matching the 2D tracks. Section 6.3.2 describes recent improvements of the tracking algorithms which use more than two consecutive frames in order to increase track lengths.

# *6.3.1 Shadow Particle Tracking Velocimetry*

### **Experimental Setup**

Particle tracking has been performed in a tank with a 15 cm × 15 cm square cross-section, where a von Kármán flow is created between two bladed discs of radius *R* = 7.1 cm, separated by 20 cm and counter-rotating at a constant frequency Ω (Fig. 6.4a). The flow has a strong mean spatial structure arising from the counter-rotation of the discs. The azimuthal velocity resulting from this forcing is of order 2*π*Ω*R* near the discs' edge and zero in the mid-plane (*z* = 0), creating a strong axial gradient (Fig. 6.4a). The discs also act as centrifugal pumps, ejecting fluid radially outward in their vicinity and driving a large-scale poloidal recirculation with a stagnation point at the geometrical centre of the cylinder (Fig. 6.4b). Diluting an industrial lubricant, Ucon™, in water yields a mixture with a kinematic viscosity *ν* = 8.2 × 10⁻⁶ m² s⁻¹ and a density *ρ* = 1000 kg m⁻³, allowing for the production of intense turbulence with a Taylor-based Reynolds number *Rλ* = 200 and a dissipative length scale *η* = 130 μm (see Table 6.2 for more details on

**Fig. 6.4** (**a**) Sketch of the counter-rotating von Kármán flow. Arrows indicate the topology of the mean flow; the dashed line indicates the mid-plane of the vessel. (**b**) Schematic cut of the vessel along the *(z, x)* or *(z, y)* plane. (**c**) Optical setup for S-PTV with two identical optical arrangements forming an angle *θ* = 90° (only the vertical arm is described). The 1 W LED source is imaged in the focus of a parabolic mirror to form a large collimated beam. A converging lens and a diaphragm are used to make the LED a better point-like source of light. Light propagates through the flow volume, passing through a beam splitter (BS), before being collected by a 15 cm lens that redirects the collimated light into the camera objective. The optical system [*L*2 + objective] is focused on the camera side of the vessel, marked with a dash-dotted line


**Table 6.2** Parameters of the flow

| Ω (Hz) | *v′x* (m s⁻¹) | *v′y* (m s⁻¹) | *v′z* (m s⁻¹) | *v′* (m s⁻¹) | *τη* (ms) | *η* (μm) | *ε* (m² s⁻³) | *Rλ* | *Re* |
|---|---|---|---|---|---|---|---|---|---|
| 5.5 | 0.50 | 0.49 | 0.33 | 0.45 | 2.0 | 128 | 2.1 | 190 | 21,200 |
| 6.9 | 0.62 | 0.62 | 0.41 | 0.56 | 1.5 | 111 | 3.6 | 225 | 26,700 |

Ω is the rotation rate of the discs. The dissipative time scale is estimated from the zero-crossing time *t*₀ = (*t*₀*x* + *t*₀*y* + *t*₀*z*)/3 of the acceleration auto-correlation functions, with *t*₀ ≈ 2.2*τη* [36]; the dissipation rate is then estimated as *ε* = *ν/τη*² and the dissipative length scale as *η* = (*ν*³*/ε*)^{1/4}. The rms velocities are obtained at the geometrical centre of the flow using data points situated in a ball of 1 cm radius. The Taylor-based Reynolds number is estimated as *Reλ* = √(15*v′*⁴*/*(*νε*)) with *v′* = √((*v′x*² + *v′y*² + *v′z*²)/3). The large-scale Reynolds number is *Re* = 2*π*Ω*R*²*/ν*. The kinematic viscosity of the water-Ucon™ mixture is *ν* = 8.2 × 10⁻⁶ m² s⁻¹ and its density *ρ* = 1000 kg m⁻³

the flow parameters). This setup allows for the tracking of Lagrangian tracers (250 μm polystyrene particles of density *ρp* = 1060 kg m⁻³) in a large volume, 6 × 6 × 5.5 cm³, centred around the geometrical centre of the flow, *(x, y, z)* = (0, 0, 0). Two high-speed video cameras (Phantom V.12, Vision Research, Wayne, NJ) with a resolution of 800 × 768 pixels and a frame rate up to *fs* = 12 kHz are used. This sampling frequency is sufficient to resolve particle accelerations, calculated by taking the second derivative of the trajectories.

The camera setup uses a classical shadowgraphy (ombroscopy) configuration [34] with parallel illumination. We have recently used such a setup (depicted in Fig. 6.4c) for Lagrangian studies of turbulence [35]; we will use the data from this experiment to illustrate the present section. It consists of two identical optical arrangements, each with a small LED located at the focal point of a large parabolic mirror (15 cm diameter, 50 cm focal length), forming two collimated beams which are perpendicular to each other in the measurement volume. A converging lens and a diaphragm are used to make the LED a better point-like source of light. Each large parallel beam then reflects on a beam splitter and intersects the flow volume before being collected by the camera sensor through a doublet consisting of a large lens (15 cm in diameter, 50 cm focal length) and an 85 mm macro camera objective. All optical elements are aligned using large (homemade) reticles, which also precisely measure the magnification of each arrangement. An object placed in the field of view appears as a black shadow on a white background, corresponding to the parallel projection of the object onto the sensor. Thanks to the parallel illumination, the system has telecentric properties: the particle size and shape do not depend on the object-to-camera distance, as opposed to classical lighting schemes where, due to perspective, the apparent object size changes with that distance. The telecentricity also makes the calibration of each camera trivial, as a single, homogeneous magnification factor relates the *(x, y)* pixel coordinates to the *(X, Z)* real-world coordinates for one camera and to the *(Y, Z)* real-world coordinates for the other. In addition, the optical arrangement is rigorously implemented so that the *Z* real-world coordinate is exactly redundant between the two cameras. This makes the matching step (detailed below) both simple and accurate.
When particles are tracked, camera 1 provides their *(x*₁*, z*₁*)* 2D positions, while camera 2 measures their *(y*₂*, z*₂*)* positions. As the *z* coordinate is redundant, a simple affine relation *z*₂ = *az*₁ + *b* accounts for slight differences in magnification and centring between the two arrangements.

### **The Trajectory Stereo-Matching Approach**

Given the magnification of the setup (1*/*4, 1 px corresponds to 90 μm), the depth of field of the optical arrangement is larger than the flow volume itself. As the two beams do not overlap in the entire flow domain, particles situated in one light beam but outside the common measurement volume can give a well-contrasted image on one camera while not being seen by the other. Such a situation could lead to incorrect stereo-matching events when many particles are present. This is illustrated in Fig. 6.5a,

**Fig. 6.5** (**a**) Scheme of the intersecting parallel light beams showing that individual particle stereo-matching is not reliable. The black dots are two particles at the same *z* position outside of the beams' overlapping region and the dashed circle is a particle at the same *z* position within the region (both situations being measured identically by the cameras). (**b**) Time evolution of the raw *z* (redundant) coordinate of the same particles as obtained with 2D tracking with camera 1 and camera 2. Only 38 matched trajectories are plotted. (**c**) Affine relation *z*₂ = *az*₁ + *b* (*a* = 0.98, *b* = 15.6 px) measured with 1900 trajectories corresponding to 6 × 10⁵ data points. (**d**) A random sample of 150 trajectories in the vessel obtained from the same movie

where the shadows left by two particles situated at the same *z* position but outside of the beams' overlapping region (black dots) could be interpreted as one "ghost" particle within that region (dashed circle). To mitigate these errors, we construct 2D trajectories for each camera using the *(x*₁*, z*₁*)* and *(y*₂*, z*₂*)* coordinates separately. Once tracked in time, these trajectories, instead of individual particle positions, may be stereo-matched. This approach is similar to the "pattern matching" originally proposed by Guezennec et al. [16], in contrast with the particle-matching strategy, used in many recent studies, which performs stereo-matching on individual particles before tracking. The advantage of this method, in particular when combined with telecentric illumination, is that neither stereo-matching nor tracking errors are made, as will be detailed below. However, many more 2D trajectories must be tracked before being stereo-matched. Another drawback is that projecting 3D positions onto a plane strongly decreases the apparent inter-particle distance, making particle overlap an issue when the particle diameter becomes large with respect to the effective measurement volume. However, the redundancy of the *z* coordinate may be used to resolve such indeterminations when the apparent proximity results only from the projection.

We implement a 2D tracking scheme using a simple method inspired by previous works [8, 17, 20]. This tracking procedure searches for particles in frame *n* + 1 whose distance from particles in frame *n* is smaller than a specified value. If only one particle is found in the vicinity of the last point of a track, the track is continued. When multiple candidates are found, the track is stopped and new tracks are initiated from the candidate particles. Particles in frame *n* + 1 which do not match any of the existing tracks in frame *n* initiate new trajectories. This procedure, whose improvement is described in the next subsection, results in a collection of 2D trajectories of various lengths.
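A minimal sketch of this frame-to-frame procedure (hypothetical function name `track_2d`; a greedy nearest-neighbour pass, not the authors' code) could read:

```python
import numpy as np

def track_2d(frames, max_dist):
    """Greedy nearest-neighbour 2D tracking, as described above:
    a track is continued only when exactly one unclaimed particle of
    the next frame lies within max_dist of its last point; otherwise
    the track is stopped and candidates seed new tracks."""
    active = [[p] for p in frames[0]]      # open tracks
    finished = []
    for pts in frames[1:]:
        pts = list(pts)
        next_active, used = [], set()
        for tr in active:
            last = np.asarray(tr[-1])
            d = [np.linalg.norm(last - np.asarray(p)) for p in pts]
            close = [i for i, di in enumerate(d) if di < max_dist]
            if len(close) == 1 and close[0] not in used:
                used.add(close[0])
                tr.append(pts[close[0]])
                next_active.append(tr)
            else:
                finished.append(tr)        # ambiguous or lost: stop track
        # unmatched particles initiate new trajectories
        next_active += [[p] for i, p in enumerate(pts) if i not in used]
        active = next_active
    return finished + active
```

On two well-separated particles moving smoothly, this returns two tracks spanning all frames; ambiguous configurations simply split into shorter tracks, as in the procedure above.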

Stereo-matching is then performed by identifying trajectories with *z*₁*(t)* ≈ *z*₂*(t)* using the relation *z*₂ = *az*₁ + *b*, as shown in Fig. 6.5b. This calibration relation is determined recursively using a dilute ensemble of particles, for which the initial identification of a single pair of 2D trajectories gives a first estimate of the relationship between *z*₂ and *z*₁. As more trajectories are found, the affine relationship is refined until the maximum possible number of trajectories for a single experiment is obtained. In this recursive manner, the tracking algorithm is self-calibrating. Here, the parameters are *a* = 0.98 and *b* = 15.6 px, estimated from 1900 matched trajectories corresponding to 6 × 10⁶ data points, as shown in Fig. 6.5c. Together with the pixel-to-mm conversion from one of the cameras, this method provides all relevant information about particle positions in world coordinates. Note that the temporal supports of the 2D tracks *z*₁*(t)* and *z*₂*(t)* for a given particle may not be identical (the track may be longer on one camera than on the other, or may start and end at slightly different times). When analysing 3D Lagrangian statistics, only the portions of trajectories over a common temporal interval are kept. In addition, only trajectories with sufficient temporal overlap (typically 70 time-steps, i.e., approximately 2.5*τη*) are matched, in order to prevent anomalous trajectories due to possible ambiguities when matching short patterns. Such an occurrence becomes increasingly unlikely as the trajectory duration threshold is increased. A false trajectory can only occur when the relationship *z*₂ = *az*₁ + *b* becomes undetermined, which may happen, for instance, when two particles are close to colliding and the matching of the two nearby tracks becomes ambiguous. Such a situation remains, however, an extraordinarily rare event in dilute conditions.
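The recursive refinement of the affine relation can be sketched as follows (hypothetical names and thresholds; tracks are paired when the rms mismatch of the redundant coordinate is small, then (*a*, *b*) is re-fitted by least squares on all paired samples):

```python
import numpy as np

def refine_affine(z1_tracks, z2_tracks, a0=1.0, b0=0.0, tol=3.0, n_iter=5):
    """Recursively refine z2 = a*z1 + b: tracks from the two cameras
    are paired when the rms mismatch |a*z1 + b - z2| stays below `tol`
    pixels, then (a, b) is re-fitted on all paired samples."""
    a, b = a0, b0
    for _ in range(n_iter):
        Z1, Z2 = [], []
        for z1 in z1_tracks:
            for z2 in z2_tracks:
                m = min(len(z1), len(z2))
                if np.sqrt(np.mean((a * z1[:m] + b - z2[:m]) ** 2)) < tol:
                    Z1.append(z1[:m])
                    Z2.append(z2[:m])
        if not Z1:
            break                           # no pairs: keep current estimate
        Z1, Z2 = np.concatenate(Z1), np.concatenate(Z2)
        a, b = np.polyfit(Z1, Z2, 1)        # least-squares affine re-fit
    return a, b
```

Starting from a rough initial guess, the pairing and the fit reinforce each other, which is the self-calibrating behaviour described above.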
After tracking and stereo-matching, each pair of movies gives an ensemble of trajectories from which all single particle statistics can be computed as shown in Fig. 6.5d.

### **Flow Measurements**

Measurements are performed in a volume (6 × 6 × 5.5 cm³) larger than one integral scale (*Lv* ≈ 4.8 cm) of an inhomogeneous flow. As the statistics are sub-sampled spatially and temporally, a large number of trajectories is needed to achieve good statistical convergence. We record 200 sets of movies with a duration of 1.3 s at 12 kHz and obtain O*(*1000*)* tracer trajectories per set. A statistical ensemble of O*(*10⁵*)* trajectories with mean duration ⟨*t*⟩ ∼ 0.25*/*Ω permits the spatial convergence of both Eulerian and Lagrangian statistics. The flow properties are obtained from the PTV data and are given in Table 6.2 together with the energy dissipation rate *ε*. The latter quantity is estimated by calculating the zero-crossing time *τ*₀ of the acceleration auto-correlation curves, which is empirically known to be related to the Kolmogorov time scale *τη* (*τ*₀ ≈ 2.2*τη*) [36] and thus to the energy dissipation. The fluctuating velocity of the flow is found to be proportional to the propeller frequency (Table 6.2) due to the inertial stirring of the bladed discs, which forces the turbulence to become fully developed provided *Re* = 2*π*Ω*R*²*/ν >* 3300 [37]. In what follows, we focus our analysis on the case Ω = 5.5 Hz.

The measurement yields a large ensemble of trajectories, each containing the temporal evolution of the Lagrangian velocity at the particle position. Based on this ensemble of trajectories, one may reconstruct the mean velocity field in 3D,

$$
\langle \mathbf{v} \rangle (x, y, z) = (\langle v\_{x} \rangle, \langle v\_{y} \rangle, \langle v\_{z} \rangle),
$$

and the rms fluctuations of each velocity component, *(v′x, v′y, v′z)*. This is achieved by an Eulerian averaging of the Lagrangian dataset on a Cartesian grid of size 12³, which corresponds to a spatial resolution of 5 mm in each direction. The choice of the grid size must fulfil several criteria: it must be small compared to the typical scale of the mean flow (here, *Lv* ∼ 4.8 cm), but large enough for statistical convergence to be achieved. Here, the grid size was chosen so that there are at least O*(*1000*)* trajectories in each bin, enough to converge both mean and rms values.
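The Eulerian averaging step can be sketched as follows (hypothetical function name; scattered Lagrangian samples are binned onto a regular grid to obtain per-bin mean and rms velocities):

```python
import numpy as np

def eulerian_stats(pos, vel, lo, hi, n=12):
    """Bin Lagrangian samples (pos: (N, 3), vel: (N, 3)) onto an
    n x n x n Cartesian grid covering [lo, hi]^3 and return the
    per-bin mean velocity, rms fluctuation, and sample count."""
    idx = np.floor((pos - lo) / (hi - lo) * n).astype(int)
    idx = np.clip(idx, 0, n - 1)
    flat = np.ravel_multi_index(idx.T, (n, n, n))
    mean = np.zeros((n**3, 3))
    sq = np.zeros((n**3, 3))
    count = np.zeros(n**3)
    np.add.at(count, flat, 1)
    for c in range(3):                      # accumulate sums per bin
        np.add.at(mean[:, c], flat, vel[:, c])
        np.add.at(sq[:, c], flat, vel[:, c] ** 2)
    ok = count > 0
    mean[ok] /= count[ok, None]
    rms = np.sqrt(np.maximum(sq / np.maximum(count, 1)[:, None] - mean**2, 0.0))
    return mean.reshape(n, n, n, 3), rms.reshape(n, n, n, 3), count.reshape(n, n, n)
```

The `count` array also serves to check the convergence criterion above, i.e. that every bin of interest contains enough samples.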

Figure 6.6a, b displays two cross-sections of the reconstructed mean flow in two perpendicular planes: the mid-plane *(x, y, z* = 0*)* and the plane *(x* = 0*, y, z)* containing the axis of rotation of the discs. We observe a mean flow structure close to the schematic view of Fig. 6.4a. In the mid-plane the flow is almost radial and convergent, with *vz* ∼ 0, and the *z* component reverses under the transformation *z* → −*z* (Fig. 6.6b). We also observe a strong *y* component of the velocity in the *(x* = 0*)* plane, which reverses under the transformation *y* → −*y* and corresponds to the differential rotation imposed by the discs. These cross-sections also reveal that the flow has the topology of a stagnation point at the geometric centre *(*0*,* 0*,* 0*)*, as was shown in another von Kármán flow with a circular cross-section [38]. With a 3D measurement of the mean flow, it is possible to compute spatial derivatives along all directions. This leads to *∂x*⟨*vx*⟩ ∼ *∂y*⟨*vy*⟩ ∼ −1.5 for the stable directions, and *∂z*⟨*vz*⟩ ∼ 3.0 for the unstable direction. Note that the sum of these terms must be zero because it is the divergence of the mean flow. This condition is found to be well satisfied even though the velocity components were computed independently, without any constraint. The verification that the flow is divergence-free is thus an *a posteriori* test that the reconstruction of the mean flow is physically sound. Figure 6.6c, d displays rms values of velocity fluctuations

**Fig. 6.6** Cuts of the 3D reconstructed Eulerian mean velocity field (**a, b**) and rms velocity (**c, d**). The reconstruction is achieved by computing the mean **v** and the rms values *(v′x, v′y, v′z)* of the velocity in each bin of a Cartesian grid of size 12³. (**a**) *(x, y, z* = 0*)* plane; arrows are *(vx, vy)*, the colour codes *vz*. (**b**) *(x* = 0*, y, z)* plane; arrows are *(vy, vz)*, the colour codes *vx*. (**c**) rms value of the velocity fluctuations *v′* = √((*v′x*² + *v′y*² + *v′z*²)/3) in the *(x, y, z* = 0*)* plane. (**d**) rms value of the velocity fluctuations in the *(x* = 0*, y, z)* plane

*v′* = √((*v′x*² + *v′y*² + *v′z*²)/3) in the planes *(x, y, z* = 0*)* and *(x* = 0*, y, z)*. These maps reveal that the flow is anisotropic and inhomogeneous at large scales, as previously observed in similar setups [39].

# *6.3.2 Improved Four-Frame Best Estimate*

As mentioned in the previous section, using only two frames and a nearest neighbour criterion may lead to multiple candidates for a given track, or to wrong matches, when the number of particles in the field of view increases. To overcome this limitation, four-frame tracking methods were developed, such as the "four-frame minimal acceleration" method (4MA) of Maas et al. [14], which minimises the change in acceleration along the track, or its extension by Ouellette et al., the "four-frame best estimate" particle tracking method (4BE), which minimises the distance between the predicted particle position two time-steps forward in time and all the particles detected at that time [17]. The 4BE method was shown [17] to have an improved tracking accuracy compared to the 4MA method. It builds on a nearest neighbour approach and on three-frame tracking methods, improving tracking performance through location predictions based on velocities and accelerations.

The 4BE method uses four frames (*n* − 1, *n*, *n* + 1, and *n* + 2) to reconstruct particle trajectories, as illustrated in Fig. 6.7a. Individual tracks are initialised using the nearest neighbour method, which minimises the distance between a particle in frame *n* − 1 and one in frame *n*. Once a track is started, its first two locations are used to predict the position *x̃ᵢⁿ⁺¹* of the particle in frame *n* + 1:

$$
\tilde{x}\_{i}^{n+1} = x\_{i}^{n} + \tilde{v}\_{i}^{n} \Delta t,\tag{6.1}
$$

where *xᵢⁿ* is the position of the particle in frame *n*, *ṽᵢⁿ* is the predicted velocity, and Δ*t* is the time between frames. A search box is then drawn around the predicted location to look for candidates to continue the track. The size of the search box is set as small as possible (usually a few pixels), since the actual particle location is expected to be close to the prediction. Additionally, if the flow statistics are anisotropic, the search box can be made larger along the axis with higher velocity fluctuations and smaller in the directions with smaller fluctuations. This decreases computational costs because it limits the number of particles found in the initial search, and thus the number of possible track continuations. The particles found within this bounding box are then used to predict a set of positions *x̃ᵢⁿ⁺²* in frame *n* + 2:

$$
\tilde{x}\_{i}^{n+2} = x\_{i}^{n} + \tilde{v}\_{i}^{n} \left(2\Delta t\right) + \frac{1}{2}\tilde{a}\_{i}^{n} \left(2\Delta t\right)^{2},\tag{6.2}
$$

where *xᵢⁿ*, *ṽᵢⁿ*, and Δ*t* are the same as above, and *ãᵢⁿ* is the predicted acceleration. As for frame *n* + 1, a search box is drawn around each of the predicted locations *x̃ᵢⁿ⁺²*, and each of these bounding boxes is interrogated for particles. Using these particle locations, the track is determined by minimising the cost function *φᵢⱼⁿ*:

$$\phi\_{ij}^{n} = ||\mathbf{x}\_{j}^{n+2} - \tilde{\mathbf{x}}\_{i}^{n+2}||. \tag{6.3}$$

Equation (6.3) minimises the distance between the actual (*xⱼⁿ⁺²*) and predicted (*x̃ᵢⁿ⁺²*) particle locations, thus minimising the change in acceleration along a given track. An optional upper threshold on the cost function, typically half the length of the search box, can be set to help limit tracking errors. The particle, and therefore the track, that minimises this cost function while falling within the threshold is then defined as the correct track, and all other candidate tracks are discarded. Note also that a track is discarded if at any point no particle is found in its search box in frame *n* + 1 or *n* + 2.
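A single continuation step of this scheme can be sketched as follows (a simplified, hypothetical implementation: square search boxes tested with the max-norm, and the optional cost threshold set to half of `box2`):

```python
import numpy as np

def best_estimate_step(track, frame1, frame2, dt, box1, box2):
    """One 4BE continuation step. `track` holds the positions at frames
    n-1 and n; candidates in frame n+1 within `box1` of the prediction
    (Eq. 6.1) are tested by predicting frame n+2 (Eq. 6.2) and
    minimising the distance cost (Eq. 6.3)."""
    x_prev, x_n = np.asarray(track[-2]), np.asarray(track[-1])
    v_n = (x_n - x_prev) / dt                    # predicted velocity
    x_pred1 = x_n + v_n * dt                     # Eq. (6.1)
    best, best_cost = None, np.inf
    for x1 in frame1:
        if np.max(np.abs(np.asarray(x1) - x_pred1)) > box1:
            continue                             # outside first search box
        a_n = (np.asarray(x1) - 2 * x_n + x_prev) / dt**2   # predicted acceleration
        x_pred2 = x_n + v_n * (2 * dt) + 0.5 * a_n * (2 * dt) ** 2  # Eq. (6.2)
        for x2 in frame2:
            cost = np.linalg.norm(np.asarray(x2) - x_pred2)  # Eq. (6.3)
            if cost < best_cost and cost < box2 / 2:
                best, best_cost = (x1, x2), cost
    return best   # None if the track cannot be continued
```

Returning `None` corresponds to discarding the track when a search box is empty or the cost threshold is exceeded.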

While 4BE with nearest neighbour initialisation (4BE-NN) is a very good compromise between tracking accuracy and efficiency (low computational cost), there are cases where it starts to fail; for instance, it is not suitable for situations where the particle displacement becomes comparable to the inter-particle distance. We have therefore developed a modified initialisation (MI) method for 4BE (4BE-MI) that is more effective at detecting tracks than the nearest neighbour initialisation [40]. Figure 6.7b shows the modified 4BE algorithm. This method uses a search box based on the estimated maximum particle

**Fig. 6.7** (**a**) 4BE-NN. Particle locations are denoted with filled symbols, whereas predicted particle locations are denoted with hollow symbols. The boxes represent the bounding boxes used in the algorithm. The predicted path is overlaid in the figure. (**b**) 4BE-MI. The initial bounding box (now shown in the figure) allows for more potential tracks to be examined when searching for the correct track. (**c**) Comparison of tracking performance for 4BE-NN and 4BE-MI methods. At values of *ξ <* 0*.*2, the tracking error is zero for 4BE-MI

displacement between two frames to initialise tracks. The size of this search box is determined from the flow characteristics (instantaneous spatially averaged velocity, velocity fluctuations in all three directions, etc.), but it is always larger than the search box used for track continuation (which only accounts for the error in predicting the next position along the track). This allows the algorithm to explore multiple possible trajectories for each particle and removes the assumption that the closest particle in the next frame is the only option when starting a track. It also enables tracks to be constructed using knowledge of the flow physics as part of the initialisation.

The performance of the 4BE algorithm, both with and without the modified initialisation scheme, was analysed using direct numerical simulation (DNS) data of a turbulent channel flow available through the Johns Hopkins University Turbulence Databases [41]. The DNS was performed in a 8*π* × 2 × 3*π* domain using periodic boundary conditions. The Reynolds number was *Re* = *Uc h/ν* = 2.2625 × 10⁴, where *Uc* and *h* are, respectively, the channel centre-line velocity and height. The flow was initially seeded with tracer particles throughout the entire volume, which were then advected at each time-step using the resolved DNS flow field. The trajectories were cut in a subdomain of the channel, mimicking particles entering and leaving the measurement volume as is typical in experiments. The trajectories thus generated were used to benchmark the tracking scheme by comparing the tracking results to the known trajectories.

Several datasets were generated by varying the distances that the particles moved between frames. This generated data over a wide range of *ξ* , defined as the ratio of the average distance each particle moves between frames to the average separation between particles in a frame. When *ξ* is small, tracking is easy because the particles move very little between frames and there are not many particles to consider for track continuation. However, as this ratio increases, tracking becomes more difficult because the particles move a large amount between frames and there are many particles per frame. Figure 6.7c shows the tracking error *E*track plotted against *ξ* . The tracking error is defined as:

$$E\_{\text{track}} = \frac{N\_{\text{imperfect}}}{N\_{\text{total}}},\tag{6.4}$$

where *N*imperfect is the total number of imperfect tracks and *N*total is the total number of tracks in the dataset generated. A perfect track must start at the same point as the actual track and must contain no spurious locations.

Figure 6.7c shows how the tracking error *E*track decreases when using the modified initialisation scheme. *E*track is equal to zero, meaning that no erroneous tracks are computed, up to approximately *ξ* = 0.2 for the modified initialisation scheme. Additionally, at all values of *ξ*, the modified initialisation scheme performs better than the nearest neighbour initialisation scheme. This demonstrates the advantage of the modified initialisation in creating trajectories in flows with large particle displacements or high particle densities.

# **6.4 Noise Reduction in Post-Processing Statistical Analysis**

Particle tracking velocimetry leads to a collection of tracks, {**x***j(t)*}, *j* ∈ [1*, N*], from which turbulent statistics, such as the mean flow and velocity fluctuations, may be computed. Most of the desired quantities require taking derivatives of the particle positions, which inevitably amplifies noise. In the Lagrangian framework, single-particle (two-time) statistics such as the velocity or acceleration auto-correlation functions are of great interest; they are considered in Sect. 6.4.1. In the Eulerian framework, moments of velocity differences across a separation **r** (structure functions) are of great importance; these two-particle statistics are addressed in Sect. 6.4.2.

The method presented below seeks to obtain unbiased one- and two-point statistics of experimental signal derivatives without introducing any filtering. It is valid for any measured signal whose typical correlation scale is much larger than the noise correlation scale. While one aims at the true signal *x̂*, the presence of noise *b* implies that one actually measures *x(t)* = *x̂(t)* + *b(t)*. For simplicity, we consider a temporal signal *x(t)* that is centred, i.e., ⟨*x*⟩ = 0, obtained by considering *x(t)* − ⟨*x*⟩, where ⟨·⟩ denotes an ensemble average.

The method is based on the temporal increment of the signal *x* over a time *dt*, *dx* = *x(t* + *dt)* − *x(t)* = *dx̂* + *db*. Assuming that the increments of position and noise are uncorrelated, the position increment variance reads ⟨*(dx)*²⟩ = ⟨*(dx̂)*²⟩ + ⟨*(db)*²⟩. Introducing the velocity *v̂* and acceleration *â* through a second-order Taylor expansion, *x̂(t* + *dt)* = *x̂(t)* + *v̂ dt* + *â dt*²*/*2 + *o(dt*²*)*, one obtains:

$$
\left\langle \left( dx \right)^{2} \right\rangle = \left\langle \left( db \right)^{2} \right\rangle + \left\langle \hat{v}^{2} \right\rangle dt^{2} + \left\langle \hat{a}.\hat{v} \right\rangle dt^{3} + o(dt^{3}),
\tag{6.5}
$$

where ⟨*(db)*²⟩ = 2⟨*b*²⟩ in the case of a white noise [24, 42]. In Eq. (6.5), ⟨*(dx)*²⟩ is a function of *dt*, so that the velocity variance ⟨*v̂*²⟩ can be recovered by computing ⟨*(dx)*²⟩*(dt)* for different values of *dt*, followed by a simple polynomial fit of the form of Eq. (6.5). If the noise is coloured, ⟨*(db)*²⟩ = 2⟨*b*²⟩ − 2⟨*b(t)b(t* + *dt)*⟩. In this case, the method requires the noise to be correlated over times short compared to the signal correlation time. As a result, only the lowest values of ⟨*(dx)*²⟩*(dt)* are biased by ⟨*b(t)b(t* + *dt)*⟩, and a fit still allows for the evaluation of the root mean square (rms) velocity, *v̂′* = √⟨*v̂*²⟩. For an experimentally measured signal *x*, equally spaced at an acquisition rate *fs*, the minimal value of *dt* is 1*/fs*; one can then compute *dx* for the values *dt* = *n/fs*. The method requires an acquisition rate *fs* higher than usual, in order to access derivatives of the signal without aliasing errors.
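As a simple illustration of this fit (a synthetic sketch with assumed sampling parameters, not the data of this chapter), one can recover both the velocity variance and the noise level from a noisy sine trajectory:

```python
import numpy as np

fs = 10_000.0                                  # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
x_true = np.sin(2.0 * np.pi * 3.0 * t)         # synthetic "true" trajectory
rng = np.random.default_rng(0)
sigma_b = 0.01
x = x_true + rng.normal(0.0, sigma_b, t.size)  # add white measurement noise

# increment variance <(dx)^2>(dt) for dt = n/fs, n = 1..29
ns = np.arange(1, 30)
dts = ns / fs
d2 = np.array([np.mean((x[n:] - x[:-n]) ** 2) for n in ns])

# polynomial fit of Eq. (6.5): <(dx)^2> = 2<b^2> + <v^2> dt^2 + c3 dt^3
A = np.stack([np.ones_like(dts), dts**2, dts**3], axis=1)
coef, *_ = np.linalg.lstsq(A, d2, rcond=None)
noise_var = coef[0] / 2.0                      # recovered <b^2>
vel_var = coef[1]                              # recovered <v^2>
```

For this signal the exact values are ⟨*v̂*²⟩ = (6*π*)²/2 and ⟨*b*²⟩ = 10⁻⁴; the fit recovers both without any filtering of the trajectory.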

We can extend the previous calculation to higher-order derivative statistics by considering higher-order increments. For instance, the second-order increment *d*²*x* = *x(t* + *dt)* + *x(t* − *dt)* − 2*x(t)*, which is related to the acceleration variance ⟨*â*²⟩, yields:

$$
\left\langle (d^2x)^2 \right\rangle = \left\langle (d^2b)^2 \right\rangle + \langle \hat{a}^2 \rangle dt^4 + \frac{1}{6} \left\langle \hat{a} \frac{d^2\hat{a}}{dt^2} \right\rangle dt^6 + o(dt^6), \tag{6.6}
$$

where ⟨*(d*²*b)*²⟩ = 6⟨*b*²⟩ in the case of a white noise [24, 42]; otherwise, additional noise correlation terms appear which are functions of *dt*.
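These white-noise prefactors, 2⟨*b*²⟩ for the first-order increment and 6⟨*b*²⟩ for the second-order one, are easy to check numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
b = rng.normal(0.0, 1.0, 1_000_000)   # white noise with <b^2> = 1

db = b[1:] - b[:-1]                   # first-order increment
d2b = b[2:] + b[:-2] - 2.0 * b[1:-1]  # second-order increment

print(np.mean(db**2))                 # ≈ 2 <b^2>
print(np.mean(d2b**2))                # ≈ 6 <b^2>
```

The larger prefactor of the second-order increment reflects the heavier noise penalty paid when estimating accelerations rather than velocities.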

# *6.4.1 Lagrangian Auto-Correlation Functions*

The approach developed above is not restricted to one-time statistics of the signal derivatives but can be generalised to estimate the noiseless first- and second-order derivative auto-correlation functions of the signal, *C*v̂v̂*(τ)* = ⟨*v̂(t)v̂(t* + *τ)*⟩ and *C*ââ*(τ)* = ⟨*â(t)â(t* + *τ)*⟩. This is done by considering the correlations of the first- and second-order increments, ⟨*dx(t)dx(t* + *τ)*⟩ and ⟨*d*²*x(t)d*²*x(t* + *τ)*⟩, which are functions of *dt* and *τ*. Noiseless velocity and acceleration correlation functions are then estimated, for each time lag *τ*, by a polynomial fit in the time increment *dt* using the following expressions:

$$
\begin{cases}
\mathcal{C}\_{dx\,dx}(\tau, dt) = \mathcal{C}\_{\hat{v}\hat{v}}(\tau)\, dt^{2} + \frac{1}{2}\left( \mathcal{C}\_{\hat{a}\hat{v}}(\tau) + \mathcal{C}\_{\hat{v}\hat{a}}(\tau) \right) dt^{3} + \mathcal{C}\_{db\,db}(\tau, dt) + o(dt^{3}) \\
\mathcal{C}\_{d^{2}x\,d^{2}x}(\tau, dt) = \mathcal{C}\_{\hat{a}\hat{a}}(\tau)\, dt^{4} + \frac{1}{12}\left( \mathcal{C}\_{\hat{a}\,(d^{2}\hat{a}/dt^{2})}(\tau) + \mathcal{C}\_{(d^{2}\hat{a}/dt^{2})\,\hat{a}}(\tau) \right) dt^{6} + \mathcal{C}\_{d^{2}b\,d^{2}b}(\tau, dt) + o(dt^{6}),
\end{cases} \tag{6.7}
$$

where *Cfg(τ)* = ⟨*f(t)g(t* + *τ)*⟩ denotes a cross-correlation function. Note that the rms values correspond to the case *τ* = 0, and that ⟨*(dx)*²⟩ and ⟨*(d*²*x)*²⟩ are functions of *dt*. In the above expressions, and in the case of a white noise, the auto-correlation functions of the first- and second-order increments of the noise can be written explicitly. With the signal sampled at a frequency *fs*, one has *dt* = *n/fs* and *τ* = *m/fs*, and the correlation functions of the digitised noise increments read:

$$
\begin{cases}
\mathcal{C}\_{dbdb} \left( \tau = \frac{m}{f\_s}, dt = \frac{n}{f\_s} \right) = \left\langle b^{2} \right\rangle \left( 2\delta\_{m,0} - \delta\_{m,n} \right) \\[6pt]
\mathcal{C}\_{d^{2}bd^{2}b} \left( \tau = \frac{m}{f\_s}, dt = \frac{n}{f\_s} \right) = \left\langle b^{2} \right\rangle \left( 6\delta\_{m,0} - 4\delta\_{m,n} + \delta\_{m,2n} \right),
\end{cases} \tag{6.8}
$$

where *δm,n* is the Kronecker symbol. For both derivatives, the white noise contribution to the auto-correlation functions is largest at *τ* = 0, where it enters as an additive term. For the first-order increments, the noise also yields a negative term at *m* = *n*. For the second-order derivatives (the acceleration in the case of Lagrangian tracks), the noise has a larger weight and additionally contributes to a third point of the function (*m* = 2*n*) with a positive term of smaller amplitude. Considering white noise terms up to *dt*⁶, all other values of *τ* directly yield the noiseless function.
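As a hedged illustration (a minimal sketch, not the authors' code), the fitting strategy can be tried on a synthetic signal: a smooth velocity is integrated to a position track, white position noise is added, and the noiseless velocity auto-correlation at a fixed lag *τ* is recovered as the intercept of a linear fit of *C*dxdx*(τ, dt)/dt*² versus *dt*. All parameters (sampling frequency, sine frequencies, noise rms) are invented for the example; for *τ > dt* the white-noise term of Eq. (6.8) vanishes, so only the signal terms of Eq. (6.7) survive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic test signal: smooth velocity, integrated position, plus
# additive white position noise (all values illustrative).
fs = 1000.0                                   # sampling frequency, Hz
t = np.arange(40000) / fs
v_true = np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)
x = np.cumsum(v_true) / fs + 1e-3 * rng.standard_normal(t.size)

def C_dxdx(x, m, n):
    """Increment correlation <dx(t) dx(t+tau)> with dt = n/fs, tau = m/fs."""
    dx = x[n:] - x[:-n]
    N = dx.size - m
    return np.mean(dx[:N] * dx[m:m + N])

# For tau > dt the noise term of Eq. (6.8) vanishes, so
# C_dxdx(tau, dt)/dt^2 = C_vv(tau) + c*dt + o(dt): a linear fit in dt
# yields the noiseless correlation as the intercept.
m = 50                                        # tau = 50 ms
ns = np.arange(1, 9)                          # dt from 1 to 8 samples
dts = ns / fs
corr = np.array([C_dxdx(x, m, n) for n in ns])
slope, C_vv_est = np.polyfit(dts, corr / dts**2, 1)
```

For this synthetic velocity, the exact auto-correlation at *τ* = 50 ms is 0.5 cos(0.2π) + 0.125 cos(0.5π) ≈ 0.40, which the intercept should approach.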

### **Results**

The method has been applied to the material particle trajectories from Ref. [43]. It has been tested successfully for different particle diameters (from 6 to 24 mm), Reynolds numbers (350 *< Reλ <* 520), and two density ratios (0.9 and 1.14), as well as for neutrally buoyant particles from Ref. [44]. In this example, we focus only on the case of particles 6 mm in diameter with density ratio 1.14 at a Reynolds number *Reλ* = 520. The position trajectories are obtained by stereo-matching of successive image pairs recorded with two cameras under ambient lighting. The particles appear as large, bright discs on a uniform dark background, which yields sub-pixel noise for the trajectories (the apparent particle diameter is about 20 pixels). This noise is not correlated with the particle position, as the background is uniform, nor with its velocity, as the exposure time is short enough to freeze the particles on the images. In practical situations, the presence of sub-pixel displacements can lead to a short-time correlation of the noise, typically over a few frames.

Figure 6.8 shows the evolution of ⟨*(dx)*²⟩ and ⟨*(*d²*x)*²⟩ with *dt*. A simple linear function of *dt*² is enough for ⟨*(dx)*²⟩, while a sixth-order polynomial better suits ⟨*(*d²*x)*²⟩. The first points of ⟨*(*d²*x)*²⟩ do not follow Eq. (6.6), which may be due to the fact that we are not dealing with a purely white noise, as will be shown in Fig. 6.9b. Using the estimated values of the rms acceleration *a* and of ⟨*(*d²*b)*²⟩, we can define a noise-to-signal ratio *b fs*²*/a* = 11*.*9, where we have defined *b* = √(⟨*(*d²*b)*²⟩*/*6) by analogy with the white noise case. When considering the noise weight on the velocity signals, we of course find a much smaller magnitude, *b fs/v* = 0*.*14, as velocity is only a first-order derivative (*v* being the rms velocity estimated with this method).

**Fig. 6.8** (**a**) Evolution of ⟨*(dx)*²⟩ with *(dt/τa)*², where *τa* = 8*.*1 ms is the particle acceleration time scale (integral of the positive part of the particle acceleration auto-correlation function). The dashed line is a linear fit over the range 0 *< dt/τa* ≤ 0*.*25. (**b**) Evolution of ⟨*(*d²*x)*²⟩ with *dt/τa*. The dash-dotted and dashed lines are fourth- and sixth-order fits (*α* + *β(dt/τa)*⁴ and *α* + *β(dt/τa)*⁴ + *γ(dt/τa)*⁶, respectively) over the range 0 *< dt/τa* ≤ 0*.*62. The insets are zooms on low values of *dt/τa*

**Fig. 6.9** (**a**) Auto-correlation functions of the velocity and (**b**) of the acceleration, estimated with the proposed method (dashed line) and directly computed by differentiating the position signal obtained by PTV (continuous line). The insets are zooms on low values of *τ*. The fit ranges used to obtain the functions are the same as those used in Fig. 6.8. The dash-dotted line in (**b**) is the correlation estimated from filtered trajectories using a Gaussian kernel *K* = *Aw* exp*(*−*t*²*/*2*w*²*)*, where *w* = 12 points and *Aw* is a normalisation factor

Figure 6.9 shows the auto-correlation functions of both the velocity and the acceleration estimated with the proposed method, compared to the raw functions. With the low level of noise in this configuration, the velocity is almost unbiased and both functions are indistinguishable, except for the first points of the raw function that are offset by the noise. For the second-order derivative, it can be observed in Fig. 6.9b that the raw acceleration auto-correlation function is biased for more than just the first three points (see inset). This is because the noise is not white but has a short correlation time compared to the signal. Combined with the finite duration of the trajectories, this makes the raw correlation function noisy over the whole range of time lags *τ*. This curve is plotted together with the one estimated with the method, fitting the coefficients up to *dt* = 5 ms, which corresponds to a 30% correlation loss in the acceleration signals (the same range as in Fig. 6.8b, but the precise choice is not critical). Although the signal-to-noise ratio is poor, the estimated correlation function follows the median line between the peaks caused by noise and crosses zero at the location indicated by the raw function. It is also close to the auto-correlation function from Ref. [43], estimated by filtering the data with a Gaussian kernel *K* = *Aw* exp*(*−*t*²*/*2*w*²*)* (with *w* = 12 points and a compact support of width 2*w*; *Aw* is a normalisation factor). It should be stressed that the value *w* = 12 was chosen arbitrarily, as a compromise between suppressing oscillations at small lags and not altering too much the shape of the function at larger lags.

With the new method, we compute an acceleration time scale *τa* = 8*.*1 ms and an acceleration magnitude *a* = 12*.*4 mm s⁻², which is close to the values *τa* = 8*.*8 ms and *a* = 12*.*9 mm s⁻² found for the filtered data [43]. However, in the latter case, the value of *a* depends strongly on the choice of the filter width *w*, so that one usually estimates *a* by computing it for different filter widths, which then allows one to extrapolate a best estimate value (as introduced in [5]).

### **Discussion**

The present de-noising method estimates moments and auto-correlation functions of experimental signal derivatives. This method relies on two main assumptions:


We have tested the method in the context of Lagrangian particle tracks in turbulence, for which the noise is correlated on times much shorter than the signal, considering both first- and second-order derivatives of a time-dependent signal. The results are in good agreement with what is obtained by classical filtering processes, which require a long bias study specific to the data type [5, 45], and we believe them to be more accurate. The method avoids subjective tuning of the filter width and choice of filter type, while yielding unbiased quantities by requiring data fits in an appropriate range. While the fit range is still an adjustable parameter, we observed its impact on the results to be smaller than when filtering the data. Another advantage of the method is easy access to the noise magnitude. While building a new experimental setup, one can gather just enough statistics to converge second-order moments, estimate the noise magnitude, and iteratively improve the setup.

# *6.4.2 Eulerian Structure Functions*

### **Method**

The method presented above can be extended to compute Eulerian statistics, such as structure functions, from the collection of tracks (which can be two-frame displacement vectors in PIV). From particle positions **x**, which are measured with some noise **b** (**x** = **x**ˆ + **b**, where **x**ˆ are the actual positions), the 3D Lagrangian displacement field between two consecutive images taken at instants *t* and *t* + *dt* is *d***x** = **x***(t* + *dt)* − **x***(t)* = *d***x**ˆ + *d***b**. This displacement field can be conditioned on a Cartesian grid so that its first moment

$$
\langle d\mathbf{x} \rangle = \left\langle \mathbf{\hat{v}} \right\rangle dt + \left\langle d\mathbf{b} \right\rangle + o(dt^2) \tag{6.9}
$$

is computed in each bin of the grid, yielding the mean flow ⟨**v**ˆ⟩. We then compute the centred second-order moment of the displacement field

$$
\left\langle (d\mathbf{x} - \langle d\mathbf{x} \rangle)^2 \right\rangle = \left\langle \hat{\mathbf{v}}^2 \right\rangle dt^2 + 2 \left\langle \mathbf{b}^2 \right\rangle + \left\langle \hat{\mathbf{a}}' \cdot \hat{\mathbf{v}}' \right\rangle dt^3 + o(dt^3), \tag{6.10}
$$

where the prime stands for fluctuating quantities. Note that this formula is easily extended to centred cross-component second-order moments, which are linked to the components of the Reynolds stress tensor at each point of the grid.

The de-noising strategy is applied to data obtained from pairs of images taken with standard PIV cameras; one experimental set corresponds to a single value of *dt*. The moments ⟨*d***x**⟩ and ⟨*(d***x** − ⟨*d***x**⟩*)*²⟩ are then calculated for multiple experimental sets in which images of the particles in the flow are collected at increasing values of *dt*. When the evolution of ⟨*(d***x** − ⟨*d***x**⟩*)*²⟩ with *dt* is fitted by a polynomial of the form *c*1*dt*² + *c*2 in each bin, the leading coefficient is the field ⟨**v**ˆ²⟩. The third-order correction is negligible because dimensional analysis gives ⟨**v**ˆ²⟩*/(*⟨**a**ˆ · **v**ˆ⟩*τη)* ∼ *Reλ*, where *τη* = √*(ν/ε)* is the dissipative time and *Reλ* is the Reynolds number at the Taylor length scale. In turbulent flows, ⟨**a**ˆ · **v**ˆ⟩ is well approximated by the dissipation rate *ε*. Taking *dt* smaller than the dissipative time thus ensures that the displacement field variance is well approximated. The advantage of this method is that it uses all the measurements taken at different values of *dt*, without having to choose any particular *dt* as would be done in a classical PIV experiment. And unlike PIV, there is no filtering of the data in the form of windowing.
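As a sketch of this fit (synthetic data with invented values, not the experimental pipeline), one Eulerian bin can be emulated by drawing displacements *dx* = *v dt* + *db* for several inter-frame times and fitting their variance with *c*1*dt*² + *c*2; the slope recovers the velocity variance and the offset the noise term 2⟨*b*²⟩ of Eq. (6.10):

```python
import numpy as np

rng = np.random.default_rng(1)

# One Eulerian bin, emulated: fluctuating velocity with variance v2_true,
# and independent position noise of variance b2_true on each frame.
v2_true, b2_true = 4.0, 1e-4                       # illustrative units
dts = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e-3   # inter-frame times, s
M = 400000                                         # samples per dt

var_dx = np.empty_like(dts)
for i, dt in enumerate(dts):
    v = np.sqrt(v2_true) * rng.standard_normal(M)
    db = np.sqrt(b2_true) * (rng.standard_normal(M) - rng.standard_normal(M))
    var_dx[i] = np.var(v * dt + db)

# Fit <(dx - <dx>)^2> = c1 dt^2 + c2 (Eq. (6.10)):
# c1 estimates the velocity variance, c2 estimates 2 <b^2>.
c1, c2 = np.polyfit(dts**2, var_dx, 1)
```

With these invented parameters, *c*1 should approach 4.0 and *c*2 should approach 2 × 10⁻⁴.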

This method can be extended to higher-order moments of the displacement field, as well as to recover increment statistics, for example, the longitudinal second-order structure function of the velocity, *S*ˆ2 = ⟨[*(***v**ˆ*(***x** + **r***)* − **v**ˆ*(***x***))* · **e***r*]²⟩ with **e***r* = **r***/*|**r**|, by fitting the evolution of ⟨[*(d***x***(***x** + **r***)* − *d***x***(***x***))* · **e***r*]²⟩ with a polynomial *S*ˆ2*(*|**r**|*)dt*² + *c*2:

$$
\begin{aligned}
\left\langle \left[ (d\mathbf{x}(\mathbf{x}+\mathbf{r}) - d\mathbf{x}(\mathbf{x})) \cdot \mathbf{e}\_r \right]^2 \right\rangle &= \left\langle \left[ (d\mathbf{b}(\mathbf{x}+\mathbf{r}) - d\mathbf{b}(\mathbf{x})) \cdot \mathbf{e}\_r \right]^2 \right\rangle \\
&\quad + \left\langle \left[ (\hat{\mathbf{v}}(\mathbf{x}+\mathbf{r}) - \hat{\mathbf{v}}(\mathbf{x})) \cdot \mathbf{e}\_r \right]^2 \right\rangle dt^2 \\
&\quad + \left\langle \left[ (\hat{\mathbf{v}}(\mathbf{x}+\mathbf{r}) - \hat{\mathbf{v}}(\mathbf{x})) \cdot \mathbf{e}\_r \right] \left[ (\hat{\mathbf{a}}(\mathbf{x}+\mathbf{r}) - \hat{\mathbf{a}}(\mathbf{x})) \cdot \mathbf{e}\_r \right] \right\rangle dt^3 + o(dt^3).
\end{aligned} \tag{6.11}
$$

Note that the structure function computation does not require the conversion of displacements to Eulerian coordinates, but only binning in the inter-particle distance |**r**|. This means that measuring structure functions is possible at arbitrarily small separations |**r**|, without any requirement on the Eulerian spatial binning. The method only requires statistical convergence over the particle pairs within a given range of inter-particle distance (a number that is proportional to *N*², with *N* the number of particles). This represents a significant advantage over methods for structure function computation that carry an associated increase in measurement noise at small separations |**r**|.
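A minimal 1D sketch of the pair-binned fit, under the strong simplification of a frozen sinusoidal velocity field and invented noise levels (not the authors' implementation): pairwise displacement differences are binned by separation, and for one bin their mean square is fitted as *S*ˆ2*(r)dt*² + *c*2 per Eq. (6.11).

```python
import numpy as np

rng = np.random.default_rng(2)

# Particles at random positions sample a frozen field v(x) = sin(x);
# displacements over each dt carry independent white position noise
# of rms b_rms on both frames (all values illustrative).
N, b_rms = 3000, 0.02
x = rng.uniform(0.0, 20.0 * np.pi, N)
v = np.sin(x)
dts = np.array([0.05, 0.10, 0.15, 0.20])

# Pairs whose separation falls in a narrow bin around r = pi/2,
# where the exact structure function is 1 - cos(r) = 1
r = np.abs(x[:, None] - x[None, :])
mask = (r > np.pi / 2 - 0.05) & (r < np.pi / 2 + 0.05)

S_dx = np.empty_like(dts)
for i, dt in enumerate(dts):
    db = b_rms * (rng.standard_normal(N) - rng.standard_normal(N))
    dx = v * dt + db
    ddx = dx[:, None] - dx[None, :]
    S_dx[i] = np.mean(ddx[mask] ** 2)

# Slope in dt^2 = de-noised structure function; offset = noise term 4<b^2>
S2_est, c2 = np.polyfit(dts**2, S_dx, 1)
```

The slope should approach 1 − cos(π/2) = 1, while the positive offset collects the noise contribution 4⟨*b*²⟩.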

The second-order moment of the velocity fluctuations and the second-order structure function are presented here as examples of what the expansion of statistical moments, combined with data collected at different *dt*, can achieve. Higher-order moments of the velocity fluctuations and higher-order structure functions can easily be computed by this method with reduced noise, although they will contain residual noise from the computation of lower-order moments (the *o(dt*³*)* terms above).

### **Results**

Particle displacements measured in a homogeneous, isotropic turbulence experiment [33, 46] are used to demonstrate the validity and accuracy of the method. Two CMOS cameras with a resolution of 2048×1088 pixels were used in a stereoscopic arrangement. Images were collected in double-frame mode, separated by a time-step *dt* ranging from 0.05*τη* to 0.2*τη*. Alternatively, a very fast acquisition/illumination rate, using a high-speed camera and kHz pulsed lasers, allows one to collect a single image sequence and then vary *dt* in the analysis by skipping an increasing number of images in the sequence. Measurements were obtained in a volume of 10 × 10 × 1 cm³ using a Nd:YAG laser. For each experiment, approximately 10,000

**Fig. 6.10** (**a**) Longitudinal second-order structure functions of the raw displacement field against the separation |**r**| normalised by the Kolmogorov length scale *η*, for different values of *dt* equally spaced from 0.05*τη* to 0.2*τη*, at *Reλ* = 291. (**b**) The same quantities plotted at given separations |**r**| (indicated by the vertical dashed lines in (**a**); ascending order corresponds to increasing *dt*), as a function of the inter-frame time-step *dt*. The lines are fits of the form *c*1*dt*² + *c*2

pairs of image sets per time-step (each set providing the 3D position of several hundred particles in the flow) were collected to ensure statistical convergence.

The longitudinal second-order structure functions of *d***x** (Fig. 6.10a) at different time-steps *dt* show clearly how the noise affects the signal for different values of *dt*. The displacement correlations plotted at fixed separations (five different values) are all quadratic in *dt* (Fig. 6.10b), showing that this approximation is robust for different levels of measurement noise. The trend *c*1*dt*² + *c*2 from Eq. (6.11) is followed at different values of the separation |**r**|, with the positive values of *c*2 being proportional to the variance of the noise (Eq. (6.10)). The quadratic coefficient *c*1 is the second-order structure function of the velocity with the noise removed. The presence of the inertial range is highlighted by the 2/3 slope in Fig. 6.11a, over approximately one decade, in good agreement with the prediction of Kolmogorov for the second-order structure function in homogeneous isotropic turbulence (*S*ˆ2 ∼ *ε*²ᐟ³|**r**|²ᐟ³) [47]. Turbulence variables extracted from velocity measurements would be subject to a significant level of uncertainty and inaccuracy (seen in Fig. 6.10a) if the noise were not removed by the method proposed here.

Figure 6.11b shows the estimation of the dissipation rate of turbulent kinetic energy, *εr* = *S*ˆ2³ᐟ²*/*|**r**|, for the three Reynolds numbers studied in this experimental implementation of the de-noising method. The plateaux obtained confirm the presence of the inertial range, and their values correspond to the ensemble average of the local dissipation rate. The estimations of *ε*, as well as of *u* (spatial average of the fluctuating velocity map), for different Reynolds numbers compare well with those in [33], obtained by 2D3C PIV, confirming the accuracy of the method. In fact, the values of *u* and *ε* are slightly lower than those obtained by PIV. This discrepancy can be explained, qualitatively, by the physics of the measurements and the effect of the noise on these metrics when it is not eliminated from the displacement measurements. Previous velocity measurements in the same

**Fig. 6.11** (**a**) Second-order structure functions of the velocity extracted with the proposed method for different *Reλ*. The black dashed line corresponds to a power law with exponent 2*/*3. (**b**) Energy dissipation rate estimated as *εr* = *S*ˆ2³ᐟ²*/*|**r**|

experiment, conducted by traditional PIV [33], corresponded well with the actual velocity measured with this technique, but with the noise variance retained. The structure function (and hence *ε*) measured with traditional techniques was also subject to an erroneous increase due to the contribution of noise to the computation of this statistical value. Equation (6.11) shows that the term ⟨[*(d***b***(***x** + **r***)* − *d***b***(***x***))* · **e***r*]²⟩ increases the value of *ε* due to noise. To determine the importance of this term, it can be expanded as 4⟨**b**²⟩*(*1 − *Cb(*|**r**|*))*, where *Cb(*|**r**|*)* is the noise spatial correlation, bounded between −1 and 1. Regardless of the value of *Cb*, it will erroneously increase the value of the structure function, yielding a higher value of *ε*. As the value of *Cb* depends on the spatial separation, it will not raise the structure function uniformly for all values of |**r**|, and the slope of the structure function may evolve with separation, making the value of *ε* noisier.

### **Discussion**

The comparison of the flow statistics with a previous 2D3C PIV study [33] allows for the validation of the proposed method. In fact, the measurements show better results, with no need to tune arbitrary filtering parameters to remove noise (the interrogation window size, for instance). The only parameters that must be chosen for the method proposed here are the different values of *dt* that are accessible for a given flow and camera/illumination available, the form of the fit function, and finally the binning in space to compute the Eulerian average and fluctuating velocities (if so desired), and in separation distance to compute the structure function.

The values of *dt* are subject to two limitations. They must be large enough that particles move more than the measurement error, while keeping the large displacements associated with the highest *dt* from interfering with the ability of the particle tracking algorithm to identify individual particles [48]. As mentioned, a maximum value of *dt* ≲ *τη* ensures that the third-order correction remains small, since ⟨**v**ˆ²⟩*/(*⟨**a**ˆ · **v**ˆ⟩*τη)* ∼ *Reλ*. This was verified in the present experimental setup, and we found this correction to be negligible compared to the second-order term. This was also the case for the structure function, provided the separation lies in the inertial range (|**r**| ≫ *η*). In such cases, the best overall agreement between fit functions and data was found when using a quadratic function of *dt*. As for the number of time-step values needed, the value of *ε* obtained using only the three largest values of *dt* was only 5% lower than when using all five datasets. Using only the lowest and largest values of *dt* allowed for a simple calculation of *ε* that was only 2% higher than with the full experimental set.

The displacement vector field obtained from particle tracking in this multiple time-step method is computed in a Lagrangian frame of reference. To compute the values of ⟨*(d***x***)*²⟩ against *dt*, the displacement field must be binned onto a spatial grid, converting it to an Eulerian frame of reference. Although the number of particles per image, or per Eulerian grid cell, is relatively small in these PTV images, the velocity is estimated independently for each particle pair. Thus, statistical convergence is reached relatively quickly (without the need for a very large number of image pairs). The computation of the structure functions highlights this advantage. As pointed out above, the structure function could in principle be computed at arbitrarily small separations between particles. However, great care should be taken in doing so because: (1) it is difficult to achieve statistical convergence in finding particles with small separations; (2) the second- and third-order terms in Eq. (6.11) are of the same order of magnitude when the separation is in the dissipation range (|**r**| ∼ 10*η*). These reasons explain why an increase of the structure functions at small separations is observed in Fig. 6.10a.

# **6.5 Conclusions**

We have presented recent developments in the characterisation of flows in laboratory experiments using particle tracking velocimetry, one of the most accurate techniques in experimental fluid mechanics. By tracking simultaneously hundreds of particles in 3D, it allows the experimentalist to address crucial questions related, for instance, to mixing and transport properties of flows.

The main aspects of particle tracking are addressed. A new optical calibration procedure, based on a plane-by-plane transformation without any camera model, is presented. It is at least as precise as the Tsai model, though more versatile, as it naturally accounts for optical distortions and can be used in very complex configurations (such as a Scheimpflug arrangement, for instance). Tracking algorithms are at the heart of PTV, and the practical implementation of two of their recent developments is described: shadow particle tracking velocimetry using parallel light, and trajectory reconstruction based on a four-frame best estimate method (4BE) with improved initialisation. While the former was developed originally to access the size, orientation, or shape of the tracked particles, the latter is a natural extension of the classical PTV setup and can easily be implemented as an add-on to any existing code.

A drawback of particle imaging techniques, as opposed to direct methods such as hot wire anemometry, is that they rely on measuring particle displacement. They are inevitably subject to noise amplification when computing spatial or temporal derivatives. We present recent developments addressing this important question which are based on computing statistics of the particle displacement with increasing time lag. They do not require any kind of filtering, and allow for the estimation of noiseless statistical quantities both in the Lagrangian framework (velocity and acceleration time correlation functions) and in the Eulerian framework (statistics of spatial velocity increments).

We conclude by mentioning that experimental techniques in fluid mechanics are continuously being improved, as new ideas combined with technological advances increase the resolution and range of existing methods: for instance, cameras are becoming ever faster and sensors better resolved. An important breakthrough in high-resolution optical tracking is expected in the coming years, thanks to FPGA (field-programmable gate array) technology, which allows images to be processed on-board and hence increases the effective data rate. Such a technique was pioneered by Chan et al. [49] and further developed by Kreizer et al. [50] to achieve on-board particle detection, allowing particle positions to be streamed directly to the hard drive of the control computer, avoiding the usual memory limitation of high-speed cameras, and making the recording duration virtually unlimited.

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 7 Numerical Simulations of Active Brownian Particles**

**Agnese Callegari and Giovanni Volpe**

# **7.1 Introduction**

Active particles differ from their passive counterparts in their ability to propel themselves. In Fig. 7.1, examples of active particles are given, classified with respect to their size and propulsion speed. Living microorganisms propel themselves for different purposes, such as finding food, escaping from predators or other dangers, and patrolling a territory [1]. Inspired by these microorganisms, researchers have recently developed several artificial particles capable of self-propelled motion activated by localised light, concentration, or temperature gradients [2]. Despite the variety of possible self-propulsion mechanisms, we can identify some key features describing the motion of a self-propelling micro- or nanosized particle: (1) *directionality* over a characteristic time interval, (2) *orientational noise*, and (3) *absence of inertia*. We note that in the case of living organisms the self-propulsion mechanism often implies a deformation of their shape; however, for simplicity, this aspect will not be taken into account in this chapter. Even though active particles are obviously three-dimensional and their motion also happens in 3D, here we will mainly consider motion in two dimensions, as in many real situations the motion of active particles takes place in a *quasi*-2D environment; for example, motile bacteria moving above the lower horizontal surface of a sample slide. However, we will provide details on how to handle the motion of active particles in 3D as well. It is worth noting that in some cases 2D confinement of active particles can give rise

A. Callegari (✉)
Bilkent University and UNAM, Ankara, Turkey
e-mail: callegari@fen.bilkent.edu.tr

G. Volpe
Gothenburg University, Gothenburg, Sweden

<sup>©</sup> The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9\_7

**Fig. 7.1** Examples of active particles. Both living organisms and man-made particles are capable of propelling themselves. Represented here are biological and artificial particles of micro- and nanoscopic size. Their speed ranges from a few μm s⁻¹ to a few mm s⁻¹. Adapted from Bechinger et al. [2]

to unexpected features that are not present when the active particles move in the bulk of a solution, i.e., in full 3D motion with no confinement [3].

# **7.2 Passive Brownian Motion**

Typical active particles are motile bacteria or artificial self-propelling microparticles, which perform their motion in a liquid environment. Since they are immersed in a fluid, active particles are subject to a viscous force, always opposing their velocity, and to thermal noise generated by the molecules constituting the fluid, which, because of the microscopic size of the particles, has a non-negligible effect. Therefore, before entering into the details of how various models can describe the motion of active particles, we will discuss how the *passive* Brownian motion of a spherical particle can be described using Langevin equations and simulated using finite-difference equations [4].

Numerical simulations of Brownian dynamics date back to the 1970s and 1980s. Seminal works in this field are [5–7]. A comprehensive review is [8] and fundamental reference books are [9–11]. A reference for the numerical methods is [12], where the method with finite differences, better known as Euler–Maruyama scheme, is explained among other schemes. Here, we report the essential basics of passive Brownian dynamics and we invite the reader to refer to the milestone works cited above for a deeper insight and a comprehensive review.

Let us assume we have a spherical microscopic particle (for example, a transparent silica particle of 3μm diameter) floating in a droplet of liquid solution (for example, a water solution) deposited on a microscope glass slide. The mass of the particle is *m* ≈ 10⁻¹⁴ kg. If we observe it with a microscope, we will see that the particle moves erratically, hovering above the flat glass surface of the slide. If we track the particle by recording its position at times regularly separated by a fixed time interval Δ*t*, we will find that its translational motion is purely diffusive, with translational diffusion constant *D*t for each of the two main directions given by

$$D\_{\rm t} = \frac{k\_{\rm B}T}{\gamma\_{\rm t}},\tag{7.1}$$

where *k*<sup>B</sup> is the Boltzmann's constant, *T* is the absolute temperature, and *γ*<sup>t</sup> is the friction coefficient of the particle for translational displacements (in the bulk of a liquid solution, *γ*<sup>t</sup> = 6*πηR*, where *η* is the viscosity of the fluid and *R* the radius of the particle). This equation is the simplest expression of Einstein's fluctuation– dissipation relation.

In the case of a homogeneous spherical particle with a perfectly smooth surface, it is not easy to experimentally detect the particle orientation. However, if we manage to measure the particle orientation, we will find that, in addition to an erratic translational motion, also the orientation of the particle changes randomly. If we can record the orientation, we will see that also the rotational motion is purely diffusive, this time with a different constant, the rotational diffusion constant *D*<sup>r</sup> given by

$$D\_{\rm r} = \frac{k\_{\rm B}T}{\gamma\_{\rm r}},\tag{7.2}$$

where *<sup>γ</sup>*<sup>r</sup> <sup>=</sup> <sup>8</sup>*πηR*<sup>3</sup> is the rotational friction coefficient of the particle.

The cause of these erratic translational and rotational motions lies in the interactions of the suspended colloidal particle with the molecules constituting the fluid, which are agitated by temperature and, at equilibrium, have velocities distributed according to the Maxwell distribution [13]. Because of the collisions with the fluid molecules, the particle experiences a *force* and a *torque* that perturb its motion (*thermal noise*).

The translational dynamics of a particle in a fluid environment is described by the Langevin equation:

$$m\mathbf{a} = -\gamma\_{\rm t}\mathbf{v} + \mathbf{F}\_{\rm th},\tag{7.3}$$

where the term −*γ*t**v** is the viscous friction force of the fluid, and **F**th is the stochastic thermal force, which has zero average and variance 2 *k*B*T γ*t.

Because of the tiny mass of a microscopic particle, inertia can often be neglected. In fact, the characteristic time needed to forget inertial effects is the relaxation time *τ*rel = *m/γ*t, which increases with the mass of the particle *m* and decreases with the friction coefficient *γ*t. For microscopic particles like the prototype silica particle of 2μm diameter, the relaxation time *τ*rel is of the order of 0.1μs. Such a relaxation time is several orders of magnitude below the time intervals typically sampled in experiments (for example, the time interval between two frames in an acquisition with a standard CMOS camera is of the order of a few milliseconds). Therefore, Eq. (7.3) can be simplified to the overdamped Langevin equation:

$$\gamma_{\rm t} \mathbf{v} = \mathbf{F}_{\rm th},\tag{7.4}$$

where the inertial term *m***a** on the left side of Eq. (7.3) has been dropped. In all systems where *t* ≫ *τ*<sub>rel</sub>, Eq. (7.4) is sufficient to capture the relevant measurable physical features. In fact, it is possible to demonstrate that the solution of Eq. (7.3) converges to that of Eq. (7.4) in the limit *m* → 0 [14].

Often, the Langevin equation (7.4) is rewritten as:

$$d\mathbf{r} = \sqrt{2D\_t} \, d\mathbf{W},\tag{7.5}$$

where *d***W** is the increment of a Wiener process, with zero average and unit variance [4, 15]. The simplest, though effective, way to solve Eq. (7.5) numerically is to use a finite-difference approach. In 2D, the variables in Eq. (7.5) can be written explicitly as:

$$\begin{cases} dx = \sqrt{2D_{\rm t}} \, dW_x \\ dy = \sqrt{2D_{\rm t}} \, dW_y \end{cases} \tag{7.6}$$

By writing the velocities as *v*<sub>*x*</sub> = Δ*x*/Δ*t* and *v*<sub>*y*</sub> = Δ*y*/Δ*t*, Eq. (7.6) takes the form [4, 15]

$$\begin{cases} \Delta x = \sqrt{2D_{\rm t} \Delta t}\ W_x \\ \Delta y = \sqrt{2D_{\rm t} \Delta t}\ W_y \end{cases} \tag{7.7}$$

where *W*<sub>*x*</sub> and *W*<sub>*y*</sub> are realisations of independent stochastic processes with average 0 and standard deviation 1 [4, 15]. By writing explicitly Δ*x* = *x*<sub>*n*+1</sub> − *x*<sub>*n*</sub> and Δ*y* = *y*<sub>*n*+1</sub> − *y*<sub>*n*</sub>, we are led to the finite-difference equation

$$\begin{cases} x_{n+1} = x_n + \sqrt{2D_{\rm t} \Delta t}\ W_{x,n} \\ y_{n+1} = y_n + \sqrt{2D_{\rm t} \Delta t}\ W_{y,n} \end{cases} \tag{7.8}$$

where, for a given Δ*t*, we obtain the sequence {*x*<sub>*n*</sub>, *y*<sub>*n*</sub>} representing the trajectory of the passive Brownian particle in the plane.
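The scheme of Eq. (7.8) can be sketched in a few lines of Python. All numeric values below (*D*<sub>t</sub>, Δ*t*, number of steps) are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Sketch of the finite-difference scheme of Eq. (7.8).
# All parameter values below are illustrative assumptions.
D_t = 0.2        # translational diffusion coefficient (um^2 s^-1)
dt = 0.01        # time step Delta t (s)
N = 10_000       # number of steps

rng = np.random.default_rng(0)

# W_{x,n} and W_{y,n}: independent Gaussian numbers (mean 0, std 1)
W = rng.standard_normal((N, 2))
steps = np.sqrt(2 * D_t * dt) * W                            # (Delta x_n, Delta y_n)
traj = np.vstack([np.zeros(2), np.cumsum(steps, axis=0)])    # sequence {x_n, y_n}
```

Since the increments are mutually independent, the whole trajectory can be generated with a single cumulative sum rather than an explicit loop.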

# **7.3 Active Particles**

# *7.3.1 Active Brownian Motion*

One of the simplest models of active motion is *active Brownian motion*. Let us consider a spherical particle that self-propels with a constant speed *v* along a given internal orientation direction in 2D. Just like a passive Brownian particle, this particle is also subject to thermal noise, which affects both its translation and its rotation. The configuration of the active Brownian particle is described by three variables: two spatial coordinates *x* and *y* for the position in the lab reference frame, and one rotational coordinate *θ* for the orientation of the particle with respect to the lab reference frame [16]. The equations determining the dynamics are

$$\begin{cases} \dot{x} = v \cos\theta + \xi_x \\ \dot{y} = v \sin\theta + \xi_y \\ \dot{\theta} = \xi_\theta \end{cases} \tag{7.9}$$

The finite-difference equations relative to Eqs. (7.9) are therefore the following:

$$\begin{cases} x_{n+1} = x_n + v \cos\theta_n\, \Delta t + \sqrt{2D_{\rm t} \Delta t}\ W_{x,n} \\ y_{n+1} = y_n + v \sin\theta_n\, \Delta t + \sqrt{2D_{\rm t} \Delta t}\ W_{y,n} \\ \theta_{n+1} = \theta_n + \sqrt{2D_{\rm r} \Delta t}\ W_{\theta,n} \end{cases} \tag{7.10}$$

Figure 7.2 depicts typical trajectories for an active Brownian particle with 3 μm diameter and characteristic self-propulsion speeds between *v* = 0 μm s<sup>−1</sup> (passive Brownian particle) and *v* = 12 μm s<sup>−1</sup>.

**Fig. 7.2** From passive to active Brownian motion. (**a**) Passive Brownian motion (*v* = 0 μm s<sup>−1</sup>) and (**b**–**e**) active Brownian motion for increasing self-propulsion speeds (from *v* = 3 μm s<sup>−1</sup> to *v* = 12 μm s<sup>−1</sup>). All trajectories last for the same amount of time
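The finite-difference scheme of Eq. (7.10) translates directly into a simulation loop. The parameter values (*v*, *D*<sub>t</sub>, *D*<sub>r</sub>, Δ*t*) in the sketch below are assumed for illustration:

```python
import numpy as np

# Sketch of the scheme of Eq. (7.10); v, D_t, D_r and dt are assumed values.
v = 3.0          # self-propulsion speed (um s^-1)
D_t = 0.2        # translational diffusion coefficient (um^2 s^-1)
D_r = 0.05       # rotational diffusion coefficient (rad^2 s^-1)
dt, N = 0.01, 5_000

rng = np.random.default_rng(1)
x, y, theta = np.zeros(N + 1), np.zeros(N + 1), np.zeros(N + 1)

for n in range(N):
    # deterministic drift along the current orientation, plus translational noise
    x[n + 1] = x[n] + v * np.cos(theta[n]) * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    y[n + 1] = y[n] + v * np.sin(theta[n]) * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    # rotational diffusion of the orientation
    theta[n + 1] = theta[n] + np.sqrt(2 * D_r * dt) * rng.standard_normal()
```

Setting `v = 0.0` recovers the passive Brownian particle of Eq. (7.8).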

An important quantity for characterising the motion of microscopic systems is the mean square displacement (MSD). Considering a 2D motion, the MSD of a particle as a function of the time lapse *t* is

$$\Delta^2(t) = \left\langle \left( x(t_1 + t) - x(t_1) \right)^2 + \left( y(t_1 + t) - y(t_1) \right)^2 \right\rangle,\tag{7.11}$$

where the average is often performed over time.<sup>1</sup> The MSD over a given time *t* represents, therefore, the average quadratic displacement from the position the particle had a time *t* before. From the MSD, we can gain a lot of insights about the dynamics of a system.

The MSD of an active Brownian particle, i.e., for a trajectory governed by Eq. (7.9), is

$$\Delta^2(t) = \left(4D_{\rm t} + v^2 t_{\rm r}\right) t + \frac{v^2 t_{\rm r}^2}{2} \left(e^{-\frac{2t}{t_{\rm r}}} - 1\right),\tag{7.12}$$

where *t*<sub>r</sub> = *D*<sub>r</sub><sup>−1</sup> is the characteristic time scale for the rotational diffusion, which, for a prototype particle with 3 μm diameter, is of about 20 s. If we write explicitly the expression above in the limits *t* ≪ *t*<sub>r</sub> and *t* ≫ *t*<sub>r</sub>, we have

$$\Delta^2(t) = \begin{cases} 4D_{\rm t}\, t + v^2 t^2 & t \ll t_{\rm r} \\ \left(4D_{\rm t} + v^2 t_{\rm r}\right) t & t \gg t_{\rm r} \end{cases} \tag{7.13}$$

For a prototype particle with 3 μm diameter, the translational diffusion coefficient is *D*<sub>t</sub> ≈ 0.15 μm<sup>2</sup> s<sup>−1</sup>. If we consider an active particle with speed *v* = 5 μm s<sup>−1</sup>, then *v*<sup>2</sup> = 25 μm<sup>2</sup> s<sup>−2</sup>. If we consider the case *t* = 0.01 *t*<sub>r</sub> = 0.2 s ≪ *t*<sub>r</sub>, then the diffusive contribution to the MSD (4*D*<sub>t</sub> *t* ≈ 0.1 μm<sup>2</sup>) is an order of magnitude

<sup>1</sup>In ergodic systems, the time average of a quantity coincides with the ensemble average, i.e., the average over all possible configurations, which, in this case, are all possible realisations of a trajectory.

smaller than the velocity contribution to the MSD (*v*<sup>2</sup>*t*<sup>2</sup> ≈ 1 μm<sup>2</sup>). The two contributions become comparable when *t* ≈ 0.001 *t*<sub>r</sub> = 0.02 s. For smaller values of the time lapse *t*, the diffusive contribution prevails. However, if the reference active speed is faster than *v* = 5 μm s<sup>−1</sup>, as happens in many cases (for example, compare the typical speeds given in Fig. 7.1), and the size is bigger than 3 μm diameter (again, compare the typical swimmer sizes given in Fig. 7.1), then the range of time lapses where the velocity contribution prevails starts from a few hundredths of a second.

Therefore, the experimentally observed dependence on the time interval *t* is essentially ballistic (i.e., quadratic, ∝ *v*<sup>2</sup>*t*<sup>2</sup>) for time scales smaller than the rotational diffusion time scale *t*<sub>r</sub>, and diffusive (i.e., linear, ∝ *t*) for time scales much longer than *t*<sub>r</sub>. In the latter case, the rotational diffusion plays a role in the randomisation of the propulsion direction over long times, and acts as an effective enhancement of the diffusion proportional to *v*<sup>2</sup>*t*<sub>r</sub>, so that the long-term effective diffusion coefficient is

$$D_{\rm eff} = D_{\rm t} + \frac{v^2 t_{\rm r}}{4}\,. \tag{7.14}$$

If there is no self-propulsion (i.e., *v* = 0), the dynamics is that of a passive Brownian particle and the MSD is linear at all time scales. Figure 7.3 represents the MSD corresponding to trajectories obtained from the numerical integration of Eq. (7.9), for different values of the self-propulsion velocity *v*.

It is an interesting exercise to calculate the MSD also for the cases of run-and-tumble motion, chiral active Brownian motion, and active motion with a Gaussian noise reorientation mechanism, which will be described in the following subsections. Even though the microscopic locomotion mechanisms differ at short time scales, the resulting MSDs present some universal features, such as a ballistic behaviour at short time scales and an enhanced diffusive behaviour at long time scales.
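The time-averaged MSD of Eq. (7.11) can be computed directly from a simulated trajectory; the function below is a minimal sketch, where the average over the start times *t*<sub>1</sub> is done with array slicing, and the toy ballistic trajectory used to check it is an assumption for illustration:

```python
import numpy as np

def msd(x, y, max_lag):
    """Time-averaged MSD of Eq. (7.11): for each lag, average the squared
    displacement over all admissible start times t1."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        out[lag - 1] = np.mean(dx**2 + dy**2)
    return out

# sanity check on a purely ballistic trajectory x = v*t (v = 1, dt = 0.1):
# the MSD must be exactly (v * lag * dt)^2
t = np.arange(100) * 0.1
m = msd(t.copy(), np.zeros_like(t), 10)
```

Plotting `m` on a log-log scale for a simulated active trajectory reveals the ballistic-to-diffusive crossover of Eq. (7.13) around *t*<sub>r</sub>.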

# *7.3.2 Run-and-Tumble Motion*

In the case of living organisms like motile bacteria, the observed motion can be described as a sequence of rectilinear forward steps with, occasionally, a sudden stop in the motion followed by a reorientation and by rectilinear motion along the new direction. This kind of motion is known as *run-and-tumble motion* [1]. During the "runs", the bacterium moves forward because of the rotational motion of its flagella, which are gathered in a bundle. During the "tumbles", one of the flagella changes rotation direction, breaking up the bundle, and the bacterium reorients itself. After this reorientation, the flagella form a bundle again and a new run starts. Run-and-tumble motion is a typical strategy of chemotactic organisms, i.e., organisms that adjust their motion according to the presence or absence of certain chemical substances, usually an attractant or a repellent, often following the concentration gradient. The mechanism of run-and-tumble motion has been thoroughly studied in *E. coli* [17, 18], and various models have been developed to simulate it, from simple ones [19] to more complex ones [20].

In the absence of the chemical substance to which the chemotactic organism is responding, the reorientation events happen with a timing well-described by a Poisson process, characterised by the probability distribution:

$$P\_{\lambda}(N=n) = e^{-\lambda} \frac{\lambda^n}{n!},\tag{7.15}$$

where *N* is the number of events observed in the time interval Δ*t*, *λ* is the average number of events expected in Δ*t*, and *n* is a natural number. In this framework, the probability that a tumble happens is

$$P_{\text{tumble}} = 1 - P_{\lambda}(N=0) = 1 - e^{-\lambda}.\tag{7.16}$$

If we want to describe the system dynamics with a finite-difference approach, we have to add another variable to the set (*x*, *y*, *θ*), because we have to know whether the bacterium is running or tumbling. We therefore add the discrete variable *ϱ* that keeps track of the status of the bacterium (1: run, 0: tumble) and that, at each time step Δ*t*, has a probability *P*<sub>tumble</sub> to be set to 0 and a probability *P*<sub>run</sub> = 1 − *P*<sub>tumble</sub> to be set to 1. The set of finite-difference equations is then:

$$\begin{cases} x_{n+1} = x_n + \varrho_n\, v \cos\theta_n\, \Delta t + \sqrt{2D_{\rm t} \Delta t}\ W_{x,n} \\ y_{n+1} = y_n + \varrho_n\, v \sin\theta_n\, \Delta t + \sqrt{2D_{\rm t} \Delta t}\ W_{y,n} \\ \theta_{n+1} = \theta_n + (1 - \varrho_n)\, \Delta\Theta_{\text{tumble},n} \\ \varrho_{n+1} = 0 \text{ or } 1 \quad \text{with probability } 1 - e^{-\lambda} \text{ and } e^{-\lambda}, \text{ respectively, at each time step} \end{cases} \tag{7.17}$$

An example of the appearance of a *run-and-tumble* trajectory is given in Fig. 7.4b.
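A minimal sketch of the scheme of Eq. (7.17) follows. The Poisson parameter `lam`, the uniform tumble-angle distribution, and all numeric values are illustrative assumptions:

```python
import numpy as np

# Sketch of Eq. (7.17); lam (the Poisson parameter per time step),
# the tumble-angle distribution, and all numeric values are assumptions.
v, D_t = 10.0, 0.2
dt, N = 0.01, 5_000
lam = 0.1                      # average number of tumble events per step

rng = np.random.default_rng(2)
x, y = np.zeros(N + 1), np.zeros(N + 1)
theta = 0.0

for n in range(N):
    # rho_n = 0 (tumble) with probability P_tumble = 1 - exp(-lam)
    rho = 0 if rng.random() < 1.0 - np.exp(-lam) else 1
    if rho == 0:
        theta += rng.uniform(-np.pi, np.pi)    # reorientation Delta Theta
    x[n + 1] = x[n] + rho * v * np.cos(theta) * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    y[n + 1] = y[n] + rho * v * np.sin(theta) * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
```

The factor `rho` switches the self-propulsion off during the tumble, while the translational noise acts at every step.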

# *7.3.3 Chiral Active Brownian Motion*

It is not uncommon to observe bacteria exploring their environment by moving in circles. For example, *E. coli* bacteria have been shown to prefer performing their quasi-circular motion in a clockwise fashion when swimming close to a solid boundary, while they move counterclockwise when swimming near an interface (for example, an air–liquid interface) [21–25]. The resulting motion is a chiral active Brownian motion. The term *chiral*, referring to the lack of symmetry of an object under mirror reflection, is used to describe the tendency of swimmers to swim in circles either in a clockwise or in a counterclockwise fashion. The term *chiral* used in combination with *active Brownian motion* thus indicates that the active particle tends to move in a well-defined way (clockwise or counterclockwise) with respect to the normal direction to the surface where the motion happens. Chiral motion is not only observed in living microorganisms, but also in artificial swimmers [26]. If we want to describe the dynamics of such a chiral active particle, we have to take into account that now the orientation varies with a well-defined *angular velocity ω*:

$$\begin{cases} \dot{x} = v \cos\theta + \xi_x \\ \dot{y} = v \sin\theta + \xi_y \\ \dot{\theta} = \omega + \xi_\theta \end{cases} \tag{7.18}$$

According to the standard convention on the direction of angles, *ω >* 0 will be associated with a *counterclockwise* motion, while *ω <* 0 will characterise a *clockwise* motion. When translated to a finite-difference equation formalism, we have

$$\begin{cases} x_{n+1} = x_n + v \cos\theta_n\, \Delta t + \sqrt{2D_{\rm t} \Delta t}\ W_{x,n} \\ y_{n+1} = y_n + v \sin\theta_n\, \Delta t + \sqrt{2D_{\rm t} \Delta t}\ W_{y,n} \\ \theta_{n+1} = \theta_n + \omega \Delta t + \sqrt{2D_{\rm r} \Delta t}\ W_{\theta,n} \end{cases} \tag{7.19}$$

An example of the appearance of a *chiral* trajectory is given in Fig. 7.4c, d, for dextrogyre and levogyre chiralities, respectively.
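Compared with the active Brownian particle, the scheme of Eq. (7.19) only adds the deterministic drift *ω*Δ*t* to the orientation. A sketch, with all numeric values assumed for illustration:

```python
import numpy as np

# Sketch of Eq. (7.19); omega > 0 gives a counterclockwise trajectory.
# All numeric values are illustrative assumptions.
v, D_t, D_r, omega = 3.0, 0.2, 0.05, 2.0
dt, N = 0.01, 5_000

rng = np.random.default_rng(3)
x, y, theta = np.zeros(N + 1), np.zeros(N + 1), np.zeros(N + 1)

for n in range(N):
    x[n + 1] = x[n] + v * np.cos(theta[n]) * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    y[n + 1] = y[n] + v * np.sin(theta[n]) * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    # deterministic drift omega*dt on top of the rotational diffusion
    theta[n + 1] = theta[n] + omega * dt \
        + np.sqrt(2 * D_r * dt) * rng.standard_normal()
```

Over the whole run the orientation accumulates, on average, *ω N*Δ*t* radians, so the trajectory winds around in circles of typical radius *v/ω*.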

# *7.3.4 Gaussian Noise Reorientation Model*

Active motion is not only related to particles that are able to propel themselves in a strict sense. In fact, it has been observed that passive colloids in an *active bath* (for example, in a solution containing motile bacteria) present an effective dynamics that is quite different from standard passive Brownian dynamics. The presence in the solution of motile living microorganisms changes the motion of the suspended colloidal particles in such a way that they behave as *effective active colloids* for which the reorientation mechanism can present an enhanced diffusion constant [27– 30]. The model commonly used for describing such a situation is

$$\begin{cases} \dot{x} = v \cos\theta + \xi_x \\ \dot{y} = v \sin\theta + \xi_y \\ \dot{\theta} = \xi_\theta \end{cases} \tag{7.20}$$

This equation is practically Eq. (7.9), with the only difference that the noise term *ξ*<sub>*θ*</sub> is not characterised by the rotational diffusion constant *D*<sub>r</sub> = *k*<sub>B</sub>*T*/(8*πηR*<sup>3</sup>), but by a different *D̃*<sub>r</sub>, usually larger than the one dictated by the size of the particle. Because the noise term is still Gaussian, the model with an enhanced diffusion constant for the orientation is referred to as the *Gaussian noise* model. An example of the appearance of a *Gaussian noise* trajectory is given in Fig. 7.4e.

# **7.4 More Complex Models**

Until now, we have learned to model the active motion of a single spherically symmetric particle in a 2D homogeneous environment. In this section, we will extend the model to include (1) the 3D case of a single spherically and non-spherically symmetric particle, (2) the presence of external fields, (3) the presence of multiple interacting particles, and (4) the presence of a multiplicative noise (that is, noise that depends on the state of the system and not only on external variables like the temperature). All these extensions are widely used in simulations for the description of the behaviour of real active systems. For example, one can have a colloidal solution of Janus particles that is concentrated enough for the particles to come close to each other, so that their (steric, electrostatic, hydrodynamic) interactions should be considered for a correct description of their collective behaviour [31–33], or a colloidal particle in a bacterial bath in the presence of an external optical potential inducing on the particles a driving force in the direction of the light intensity gradient [30]. Other common cases are the active dynamics of non-spherical particles like elongated rods [34–37] or chiral particles [38–40] in 3D, or the presence of a boundary [41, 42] that alters the value of the diffusion constant in its proximity via a diffusion gradient [43].

# *7.4.1 Non-Spherical Particles*

If the particle is non-spherical, the effect of the thermal noise is described by a diffusion matrix 𝔻 of dimension 6 × 6, which takes into account all the translational and rotational modes of the non-spherical particle and the possible correlations between them, including the purely translational and purely rotational modes [44, 45]. The diffusion matrix 𝔻 is always symmetric (𝔻 = 𝔻<sup>T</sup>) and is represented as:

$$\mathbb{D} = \begin{bmatrix} \mathbb{D}_{\text{tt}} & \mathbb{D}_{\text{tr}} \\ \mathbb{D}_{\text{rt}} & \mathbb{D}_{\text{rr}} \end{bmatrix}, \tag{7.21}$$

where 𝔻<sub>tt</sub> is the diffusion term for the purely translational modes, 𝔻<sub>rr</sub> is the diffusion term for the purely rotational modes, and 𝔻<sub>tr</sub> and 𝔻<sub>rt</sub> are the off-diagonal terms describing the roto-translational effects of the thermal agitation that might arise for particular particle shapes breaking mirror symmetry. With the same diffusion matrix formalism, we can describe the Brownian motion of a spherical particle, in which case 𝔻 is a diagonal matrix, with 𝔻<sub>tt</sub> = *D*<sub>t</sub> I<sub>3×3</sub> and 𝔻<sub>rr</sub> = *D*<sub>r</sub> I<sub>3×3</sub>.

In case the passive motion of the spherical particle is in three dimensions, an equation analogous to Eq. (7.5) describes the dependence along all three translational coordinates. However, because purely rotational and roto-translational terms may affect the orientation of the particle, it is good practice to consider all six degrees of freedom together. The analogue of Eq. (7.5) in 3D is

$$
\begin{bmatrix}
\dot{\mathbf{r}} \\
\dot{\boldsymbol{\theta}}
\end{bmatrix} = \begin{bmatrix}
\boldsymbol{\xi}\_{\mathbf{t}} \\
\boldsymbol{\xi}\_{\mathbf{r}}
\end{bmatrix} \tag{7.22}
$$

that, rewritten in the finite-difference formalism, becomes

$$
\begin{bmatrix}
\Delta \mathbf{r} \\
\Delta \boldsymbol{\theta}
\end{bmatrix} = \begin{bmatrix}
\Xi\_t \\
\Xi\_r
\end{bmatrix} \tag{7.23}
$$

where the noise terms Ξ<sub>t</sub>, Ξ<sub>r</sub> are generated at each step from a multivariate Gaussian distribution with average equal to (0, 0, 0, 0, 0, 0) and covariance matrix equal to 2𝔻Δ*t*.

The increments Δ**r** represent the displacement of the centre of mass of the particle with respect to the previous position.
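Drawing the six-dimensional noise vector (Ξ<sub>t</sub>, Ξ<sub>r</sub>) for Eq. (7.23) can be sketched as follows; here 𝔻 is taken, by assumption, as the diagonal diffusion matrix of a sphere, and all numeric values are illustrative:

```python
import numpy as np

# Sketch: drawing the noise vector (Xi_t, Xi_r) of Eq. (7.23) from a
# multivariate Gaussian with zero mean and covariance 2*D*dt. Here D is
# the diagonal diffusion matrix of a sphere; all values are assumptions.
dt = 0.01
D_t, D_r = 0.2, 0.05
D = np.diag([D_t, D_t, D_t, D_r, D_r, D_r])     # Eq. (7.21) for a sphere

rng = np.random.default_rng(5)
xi = rng.multivariate_normal(np.zeros(6), 2 * D * dt, size=100_000)
# columns 0-2 are the increments Delta r, columns 3-5 the increments Delta theta
```

For a non-spherical particle one would instead pass the full, non-diagonal matrix 2𝔻Δ*t* as covariance, so that the roto-translational correlations are generated automatically.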

The angular increments Δ*θ* need to be handled with more care. The rotation of the particle is a free 3D rotation. Therefore, the increment Δ*θ* represents a set of three increments, one for each rotation axis (that is, one for each of the unit vectors defining the axes of the particle reference frame) [45]. As the composition of rotations about different axes is not commutative, one should choose a time step Δ*t* such that the various increments Δ*θ* are small enough to ensure commutativity, within a certain error range. At each time step, a rotation ℝ = ℝ<sub>*x*</sub> ℝ<sub>*y*</sub> ℝ<sub>*z*</sub> is applied to the unit vectors defining the particle reference frame in order to find the rotated configuration. The rotation leaves the position of the centre of mass unaltered and, for the dynamics to make sense, should be such that the result of composing the rotation matrices ℝ<sub>*x*</sub>, ℝ<sub>*y*</sub>, and ℝ<sub>*z*</sub> is independent of their order, to within the order of Δ*t*.

A cleaner approach to this issue takes into account the algebra of the generators of the rotation matrix group. A rotation of an angle *φ* around the *x*, *y*, or *z* axis is written, in 3D, as a matrix acting on the components of a vector along the base unit vectors; in the specific cases:

#### 7 Numerical Simulations of Active Brownian Particles 223

$$\begin{aligned} \mathbb{R}_x(\phi) &= \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi & \cos\phi \end{bmatrix} \quad \mathbb{R}_y(\phi) = \begin{bmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{bmatrix} \\ \mathbb{R}_z(\phi) &= \begin{bmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{aligned} \tag{7.24}$$

Each of these matrices can be written as an exponential of the generator matrices 𝔾<sub>*x*</sub>, 𝔾<sub>*y*</sub>, 𝔾<sub>*z*</sub>:

$$\mathbb{G}_x = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix} \quad \mathbb{G}_y = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{bmatrix} \quad \mathbb{G}_z = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \tag{7.25}$$

in the following way:

$$\mathbb{R}_x(\phi) = e^{\phi\, \mathbb{G}_x} = \sum_{n=0}^{+\infty} \frac{\phi^n}{n!}\, \mathbb{G}_x^n; \quad \mathbb{R}_y(\phi) = e^{\phi\, \mathbb{G}_y} = \sum_{n=0}^{+\infty} \frac{\phi^n}{n!}\, \mathbb{G}_y^n; \quad \mathbb{R}_z(\phi) = e^{\phi\, \mathbb{G}_z} = \sum_{n=0}^{+\infty} \frac{\phi^n}{n!}\, \mathbb{G}_z^n. \tag{7.26}$$

Instead of performing a small finite rotation about one axis at a time, where we might run into issues because of non-commutativity, it is wiser to perform directly a rotation around the axis identified by the vector *ω*

$$\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z) = \left(\frac{\Delta\theta_x}{\Delta t}, \frac{\Delta\theta_y}{\Delta t}, \frac{\Delta\theta_z}{\Delta t}\right) \tag{7.27}$$

of the angle Δ*θ* = √((Δ*θ*<sub>*x*</sub>)<sup>2</sup> + (Δ*θ*<sub>*y*</sub>)<sup>2</sup> + (Δ*θ*<sub>*z*</sub>)<sup>2</sup>) = Δ*t* |*ω*|, which is exactly the rotation acting on the particle. Such a rotation matrix can be written as the exponential of the skew-symmetric matrix Δ*θ*<sub>×</sub>:

$$\mathbb{R}_{\hat{\omega}}(\Delta\theta) = e^{\Delta\theta_\times} = \mathbb{I} + \sum_{n=1}^{+\infty} \frac{1}{n!}\, \Delta\theta_\times^n,\tag{7.28}$$

where

$$\Delta\theta_\times = \Delta t \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix} = \begin{bmatrix} 0 & -\Delta\theta_z & \Delta\theta_y \\ \Delta\theta_z & 0 & -\Delta\theta_x \\ -\Delta\theta_y & \Delta\theta_x & 0 \end{bmatrix} \tag{7.29}$$

Because of the properties of Δ*θ*<sub>×</sub>, namely Δ*θ*<sub>×</sub><sup>3</sup> = −(Δ*θ*)<sup>2</sup> Δ*θ*<sub>×</sub>, the exponential can be written as:

$$\mathbb{R}_{\hat{\omega}}(\Delta\theta) = e^{\Delta\theta_\times} = \mathbb{I} + \frac{\sin\Delta\theta}{\Delta\theta}\, \Delta\theta_\times + \frac{1 - \cos\Delta\theta}{(\Delta\theta)^2}\, \Delta\theta_\times^2. \tag{7.30}$$

Equation (7.30) is the Rodrigues formula [46] for the rotation of an angle Δ*θ* around the direction *ω̂*, which is exactly the rotation of the axes of the particle reference frame due to a rotational noise term (stochastic rotational torque) *ξ*<sub>r</sub>.
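The Rodrigues formula of Eq. (7.30) can be sketched in a few lines; the function name and the threshold for the small-angle fallback are illustrative choices:

```python
import numpy as np

def rodrigues(dtheta):
    """Rotation matrix of Eq. (7.30) for the rotation vector
    dtheta = (dtheta_x, dtheta_y, dtheta_z)."""
    angle = np.linalg.norm(dtheta)
    # the skew-symmetric matrix (Delta theta)_x of Eq. (7.29)
    K = np.array([[0.0, -dtheta[2], dtheta[1]],
                  [dtheta[2], 0.0, -dtheta[0]],
                  [-dtheta[1], dtheta[0], 0.0]])
    if angle < 1e-12:
        return np.eye(3) + K        # first-order expansion for tiny angles
    return np.eye(3) + np.sin(angle) / angle * K \
        + (1.0 - np.cos(angle)) / angle**2 * (K @ K)

# a rotation of pi/2 about z must map the x axis onto the y axis
R = rodrigues(np.array([0.0, 0.0, np.pi / 2]))
```

Applying `rodrigues` to the noise increments Δ*θ* at each step rotates the particle reference frame without the ordering ambiguity of composing ℝ<sub>*x*</sub>, ℝ<sub>*y*</sub>, ℝ<sub>*z*</sub> separately.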

# *7.4.2 External Fields*

In many situations the particles feel the effects of external force or torque fields. In the case of colloidal particles suspended in a solution, these external fields can be due to the optical force generated by the presence of an optical potential [47], the presence of a hydrodynamic flux [48], the combined effect of weight and buoyancy that, for particles with an asymmetric mass distribution, can give rise to a torque leading to gravitaxis [49, 50], or the presence of an external magnetic field for paramagnetic particles [51–53]. Moreover, in many current realisations of artificial microswimmers, electric [54], magnetic [55, 56], and acoustic fields [57, 58], or a combination of them [59, 60], play an important role in activating the self-propelling mechanism, controlling the swimming behaviour, or confining the active particles [61]. A variety of models have been developed in order to understand the mechanism of the self-propulsion and to simulate it properly [62–65]. When writing the equations of motion for the total force **F**<sub>tot</sub> and torque **T**<sub>tot</sub> acting on a particle, we need to include the contributions of the external fields **F**<sub>ext</sub> and **T**<sub>ext</sub> as:

$$\begin{cases} \mathbf{F}\_{\text{tot}} = -\boldsymbol{\gamma}\_{\text{t}} \mathbf{v} + \mathbf{F}\_{\text{ext}} + \mathbf{F}\_{\text{thermal}} \\\\ \mathbf{T}\_{\text{tot}} = -\boldsymbol{\gamma}\_{\text{r}} \boldsymbol{\omega} + \mathbf{T}\_{\text{ext}} + \mathbf{T}\_{\text{thermal}} \end{cases} \tag{7.31}$$

that, in the overdamped limit, become

$$\begin{cases} \mathbf{v} = \frac{\mathbf{F}_{\text{ext}}}{\gamma_{\rm t}} + \boldsymbol{\xi}_{\rm t} = \frac{D_{\rm t}}{k_{\rm B}T} \mathbf{F}_{\text{ext}} + \boldsymbol{\xi}_{\rm t} \\ \boldsymbol{\omega} = \frac{\mathbf{T}_{\text{ext}}}{\gamma_{\rm r}} + \boldsymbol{\xi}_{\rm r} = \frac{D_{\rm r}}{k_{\rm B}T} \mathbf{T}_{\text{ext}} + \boldsymbol{\xi}_{\rm r} \end{cases} \tag{7.32}$$


In the case of an active Brownian particle in 2D, the presence of an external force **F**ext and/or torque *T*ext*,z* is included in the equations as follows:

$$\begin{cases} \dot{x} = v \cos\theta + \frac{D_{\rm t}}{k_{\rm B}T} F_{\text{ext},x} + \xi_x \\ \dot{y} = v \sin\theta + \frac{D_{\rm t}}{k_{\rm B}T} F_{\text{ext},y} + \xi_y \\ \dot{\theta} = \frac{D_{\rm r}}{k_{\rm B}T} T_{\text{ext},z} + \xi_\theta \end{cases} \tag{7.33}$$

In the general case of an active particle in a 3D environment, the effect of an external force **F**ext and torque *T*ext is included as follows:

$$
\begin{bmatrix}
\dot{\mathbf{r}} \\
\dot{\boldsymbol{\theta}}
\end{bmatrix} = \frac{\mathbb{D}}{k\_{\rm B}T} \begin{bmatrix}
\mathbf{F}\_{\rm ext} \\
\mathbf{T}\_{\rm ext}
\end{bmatrix} + \begin{bmatrix}
\boldsymbol{\xi}\_{\rm t} \\
\boldsymbol{\xi}\_{\rm r}
\end{bmatrix} \,. \tag{7.34}
$$

This formalism takes into account possible roto-translational effects, and reduces to the standard set of separable equations in case the translational and rotational motions are independent of each other. Beyond the mathematical formalism, the presence of an external field might induce important features in the behaviour of a system of active Brownian particles. We will see an example of this in more detail after discussing how to simulate systems with more than one particle.
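As a concrete 2D case of Eq. (7.33), the sketch below confines an active particle with a hypothetical harmonic trap **F**<sub>ext</sub> = −*k***r** and no external torque; the trap stiffness `k_trap` and all numeric values are assumptions:

```python
import numpy as np

# Sketch of Eq. (7.33) with a hypothetical harmonic trap F_ext = -k*r and
# no external torque; k_trap and all numeric values are assumptions.
v, D_t, D_r, kBT = 2.0, 0.2, 0.05, 1.0
k_trap, dt, N = 1.0, 0.01, 2_000

rng = np.random.default_rng(6)
x = y = theta = 0.0
traj = np.empty((N, 2))

for n in range(N):
    Fx, Fy = -k_trap * x, -k_trap * y       # external restoring force
    x += v * np.cos(theta) * dt + D_t / kBT * Fx * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    y += v * np.sin(theta) * dt + D_t / kBT * Fy * dt \
        + np.sqrt(2 * D_t * dt) * rng.standard_normal()
    theta += np.sqrt(2 * D_r * dt) * rng.standard_normal()
    traj[n] = (x, y)
```

The external force enters through the mobility *D*<sub>t</sub>/*k*<sub>B</sub>*T*, so the particle remains confined near the trap centre while still self-propelling.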

# *7.4.3 Interacting Particles*

The study of the behaviour of a single active Brownian particle is an important starting point for understanding the behaviour of a system of multiple particles. When multiple particles are present, interactions among them may significantly change the dynamics. Such interactions, moreover, can affect the collective behaviour of the system by determining the emergence of cooperative phenomena like, for example, phase separation or the formation of dynamic clusters. Moreover, the presence of activity itself might dramatically change the behaviour of a system: the same interactions present in a system of active or passive particles may give rise to totally different outcomes, just because in one case we have the additional feature of self-propulsion.

Usually interactions among active particles are divided into two main categories: *non-aligning* and *aligning* interactions. *Non-aligning* interactions might be attractive or repulsive and might depend on the relative position of the particles, but are *independent* of the particles' *direction of motion*. *Aligning* interactions, instead, depend on the particles' direction of motion as well. Often such interactions tend to align the particles along their direction of motion, favouring phenomena such as swarming, as, for example, in the Vicsek model [66]. Theoretically, it has been predicted that active particles responding to chemical signalling or to hydrodynamic interactions may interact mutually with an effective aligning interaction [67–70].

We start our brief overview of interacting active particles with steric interactions, which prevent the particles from occupying the same volume, and we continue with two kinds of aligning interactions: one characterising the Vicsek model, and another one exerting a torque on particles at very short distance from one another.

### **Steric Interactions**

Usually colloidal particles have a well-defined, rigid shape, and it is not possible for them to overlap. This steric interaction, which is present between both passive and active colloids, together with the presence of activity might give rise to interesting phenomena, like the formation of metastable clusters, even though the interaction itself is not attractive. For example, a set of passive colloids does not spontaneously form any cluster except in the presence of a strong attractive interaction or of a driving force that pushes the passive particles all into the same space [30]. A set of active particles, instead, because of their activity, can form metastable clusters in dilute suspensions even in the presence of repulsive mutual interactions [71–74]. In fact, depending on the propulsion velocity *v* and on the rotational diffusion time *τ*<sub>r</sub>, two active particles might collide and stay locked together because of the persistence of their active motion. Such clusters can then break apart over a time scale dictated by *τ*<sub>r</sub>, because the reorientation process makes one of the particles point away [75, 76].

For what concerns the simulation of the cases where the particles are rigid and have a finite size, steric interactions can be implemented via the so-called *hard-sphere correction*, shown in Fig. 7.5, in order to avoid non-physical situations where the particles overlap and share part of the same volume. This is

**Fig. 7.5** Steric interactions. Schematic for hard-spheres interaction between two particles, where the particles are displaced away from each other whenever they get superposed

done by checking the mutual distance between pairs of particles after each time step: if an overlap is found (for example, when the distance between the particles' centres is smaller than the particle diameter), then the two particles are moved apart along the direction connecting their centres in such a way that their distance becomes exactly one diameter, so that they touch each other but are no longer superposed. This procedure has been used in [77] and, although it does not implement an elastic collision between particles like other approaches [9] where conservation of energy and momentum is enforced, it is computationally lighter. However, the allowed superposition between overlapping particles should remain a small fraction of the particle volume, so the time step of the simulation should be accordingly small, in order not to produce unphysical effects, like a state of excessive superposition or the condition of two nearby particles that, instead of colliding, miss each other because of an excessive value of the calculated displacement. In case numerous particles are in close contact with each other, as, for example, when they are part of a cluster, one should check each pair of neighbouring particles iteratively, so as to ensure that, after each readjusting move, the condition of non-superposition remains valid. This implementation has been shown to be equivalent to introducing a short-range repulsive potential that prevents superposition. Computationally, however, the hard-sphere correction is preferred, because the correct dynamics with the short-range repulsive potential would require a much smaller time step, which is impractical.
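The pair-checking loop described above can be sketched as follows; the function name and the iteration cap `max_iter` are illustrative choices:

```python
import numpy as np

def hard_sphere_correction(pos, diameter, max_iter=10):
    """Displace overlapping pairs along the centre-to-centre direction until
    their distance is exactly one diameter; pairs are re-checked iteratively
    because fixing one pair may create a new overlap elsewhere."""
    pos = pos.copy()
    n = len(pos)
    for _ in range(max_iter):
        moved = False
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                if 0.0 < dist < diameter:
                    shift = 0.5 * (diameter - dist) * d / dist
                    pos[i] -= shift          # move both particles apart
                    pos[j] += shift
                    moved = True
        if not moved:
            break
    return pos

# two overlapping particles of unit diameter end up exactly in contact
p = hard_sphere_correction(np.array([[0.0, 0.0], [0.5, 0.0]]), diameter=1.0)
```

Both particles move by the same amount, so the correction conserves the position of the pair's centre of mass.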

### **Vicsek Model**

The Vicsek model [66] is one of the simplest models featuring alignment and swarming in a system of active particles. In the Vicsek model, the particles move with constant speed and interact via the following aligning interaction: each particle senses the orientation of the particles within a given flocking radius *R*flocking, and at each step reorients itself according to the average orientation of its neighbours (Fig. 7.6). If, for each particle *i* in the system, we define S*<sup>i</sup>* as the set of neighbouring particles at the considered instant in time, then the equations describing the system are

$$\begin{cases} \dot{x}\_i = v \cos \theta\_i \\\\ \dot{y}\_i = v \sin \theta\_i \\\\ \theta\_i = \left< \theta\_j \right>\_{j \in \mathcal{S}\_i} + \xi\_{\theta, i} \end{cases} \tag{7.35}$$

Varying the parameters describing the system, within this model one can obtain a phase transition from undirected to unidirectional motion as a function of the particle density: beyond a given density, whatever the initial conditions on positions and orientations, the particles end up moving in the same direction, thanks to the aligning interaction. If we include the steric interaction in the Vicsek model, we can obtain crystallisation at high densities and low noise intensity.
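A minimal implementation of one Vicsek update, Eq. (7.35), might look as follows (a sketch assuming a periodic square box of side L and uniform angular noise of amplitude eta; all names are ours). Neighbouring orientations are averaged as a circular mean, which avoids the 2π-wraparound problem of a naive arithmetic mean:

```python
import numpy as np

def vicsek_step(x, y, theta, v, R_flock, eta, dt, L):
    """One update of the Vicsek model: each particle adopts the mean
    orientation of its neighbours within R_flock, plus angular noise."""
    n = len(x)
    new_theta = np.empty(n)
    for i in range(n):
        dx = x - x[i]
        dy = y - y[i]
        # minimum-image convention for periodic boundaries
        dx -= L * np.round(dx / L)
        dy -= L * np.round(dy / L)
        neigh = dx**2 + dy**2 < R_flock**2      # includes particle i itself
        # circular mean of the neighbour orientations
        new_theta[i] = np.arctan2(np.sin(theta[neigh]).mean(),
                                  np.cos(theta[neigh]).mean())
    theta = new_theta + eta * (np.random.rand(n) - 0.5)
    x = (x + v * np.cos(theta) * dt) % L
    y = (y + v * np.sin(theta) * dt) % L
    return x, y, theta
```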

**Fig. 7.6** Vicsek model: reorientation mechanism. The state at a given instant is represented. To establish the orientation of the particle shown in orange at the following instant, one takes the average of the orientations of the particles lying within the flocking radius (green area). In the second panel the new orientation is shown in red; the previous orientation is shown in light blue for comparison

### **Short-Range Aligning Interactions**

Here we present another mechanism of aligning interaction [78], which can describe the effective behaviour of particles coupled by aligning hydrodynamic interactions, of bacteria moving in a background of colloidal particles, or of people moving in a crowd. We consider a set of finite-size active particles in 2D which, when closer than a given distance *R*align, interact by means of a torque. Each particle tends to reorient the direction of its active displacement towards the particles in its forward direction of motion, and away from the particles lying behind it. If **k**ˆ is the unit vector perpendicular to the plane of motion, the torque acting on a given particle will be along **k**ˆ. The torque acting on a given particle *n* due to the presence of another particle *i* within the interaction distance is proportional to the cosine of the angle formed by the direction connecting the centre of particle *n* to particle *i* (**r**ˆ*ni*) with the direction of the velocity of particle *n* (**v**ˆ*n*), and inversely proportional to the square of the distance between the particles.

In order to implement the behaviour described above, the torque should be along the vector **v**ˆ*n*×**r**ˆ*ni* and proportional to *(***v**ˆ*n*·**r**ˆ*ni)/r*<sup>2</sup> *ni*. This coefficient can be positive, negative, or zero, depending on **v**ˆ*<sup>n</sup>* · **r**ˆ*ni*, and this is the ingredient that gives the desired behaviour of reorienting towards particles in front and turning away from particles behind (Fig. 7.7).

Expressing the torque through its component along the **k**ˆ direction, the total torque felt by particle *n* can be modelled as:

**Fig. 7.7** Short-range aligning interaction: reorientation mechanism. In (**a**) a finite set of relative positions and relative motions is shown. The particle represented in red (centre of the scheme) exerts a torque on the particle in black. This torque obeys the mechanism of Eq. (7.36), and the reorientation angle it induces is represented, for the various cases considered, by a red oriented arc. In (**b**–**e**) a few cases characteristic of the induced dynamics with two and three particles are represented. In particular, the situations leading to stable clusters of 2 particles (**b**, **f**) and 3 particles (**g**–**i**) are shown. In the cases represented in (**c**–**e**), the relative arrangement of the particles does not allow any cluster to form: the particles either move apart (**c**, **e**) or proceed at constant speed and constant distance, with one particle following the other. Reproduced from Nilsson and Volpe [78] (licensed under CC BY 3.0)

$$T\_n = T\_0 \sum\_{i \in \mathcal{S}} \frac{\hat{\mathbf{v}}\_n \cdot \hat{\mathbf{r}}\_{ni}}{r\_{ni}^2} \,(\hat{\mathbf{v}}\_n \times \hat{\mathbf{r}}\_{ni}) \cdot \hat{\mathbf{k}},\tag{7.36}$$

where S is the set of particles within the radius of interaction *R*align from particle *n*, and *T*<sup>0</sup> is a parameter setting the strength of the interaction. So, in general, the model is described by the following equations:

$$\begin{cases} \dot{x}\_n = v \cos \theta\_n + \xi\_{x,n} \\ \dot{y}\_n = v \sin \theta\_n + \xi\_{y,n} \\ \dot{\theta}\_n = T\_n + \xi\_{\theta,n} \end{cases} \tag{7.37}$$

where *Tn* is the torque obtained from Eq. (7.36) for each particle *n*. We note that in this specific model the rotational friction coefficient that usually multiplies the torque is set to 1.
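The torque of Eq. (7.36) can be sketched as follows (a 2D version with the rotational friction set to 1 as in the text; the O(N²) double loop and all names are ours):

```python
import numpy as np

def aligning_torque(pos, vel_dir, T0, R_align):
    """Torque of Eq. (7.36) on every particle: sum over neighbours i of
    T0 (v_n.r_ni / r_ni**2) times the k-component of (v_n x r_ni),
    where vel_dir holds the unit vectors v_n."""
    n = len(pos)
    T = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            r = pos[b] - pos[a]
            dist = np.linalg.norm(r)
            if dist < R_align:
                r_hat = r / dist
                v_hat = vel_dir[a]
                # z-component of the 2D cross product (v_hat x r_hat).k
                cross_k = v_hat[0] * r_hat[1] - v_hat[1] * r_hat[0]
                T[a] += T0 * np.dot(v_hat, r_hat) / dist**2 * cross_k
    return T
```

The sign of `np.dot(v_hat, r_hat)` encodes the front/back distinction discussed above: a neighbour ahead gives a torque that turns the particle towards it, a neighbour behind gives a torque of the opposite sign.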

Such a model, though simple, displays a rich set of behaviours. Depending on the rotational noise conditions and on the particle concentration, we can find a gaseous phase, where the particles move independently; a phase where metastable clusters composed of a few particles are present; and a more pronounced clustering phase where larger clusters form. It has been shown [78] that the clustering transition occurs at the critical noise level *T*0*/*4*R*<sup>2</sup> align. We expect the density to also affect the clustering transition; however, this aspect was not investigated in the original paper. It has also been shown that in a mixed system of a few active and many passive particles, where the active particles interact with other active particles via an alignment term as in Eq. (7.36), and with the passive particles via a term of the opposite sign, the interaction can lead to the formation of metastable channels [78].

# *7.4.4 Multiplicative Noise*

In all the previous examples we have dealt with a uniform diffusion constant and therefore with noise conditions that are uniform in time, independent of the state of the system. However, in many real systems the noise may depend on the configuration of the system: for example, it is known that the presence of a rigid planar wall induces a gradient in the thermal diffusion coefficient, such that the closer the particle is to the planar boundary, the smaller its diffusion coefficient [79–82]. In particular, for the direction perpendicular to the planar surface, the diffusion coefficient *D*<sup>⊥</sup> becomes zero when the particle is in contact with the wall [79, 83]. An analogous effect occurs when two particles in the bulk of a solution come close to each other: the presence of one particle in the proximity of the other alters the diffusion coefficient of the second particle, and vice versa [84].

In general, a multiplicative noise is described as:

$$
\xi(\mathbf{x}) = \sqrt{2\,D(\mathbf{x})}\; W,\tag{7.38}
$$

where the diffusion constant depends on the variable describing the state of the system. To fix ideas, let us suppose *x* indicates the distance of a particle from the planar wall. In such cases, the full Langevin equation for the particle would be Eq. (7.3), with the only difference that the noise term depends on the variable *x*. The corresponding equation is [15]<sup>2</sup>

$$
\dot{\mathbf{x}} = \frac{D(\mathbf{x})}{k\_\mathrm{B}T} F\_{\mathrm{ext}} + \frac{\mathrm{d}D(\mathbf{x})}{\mathrm{d}\mathbf{x}} + \xi(\mathbf{x}), \tag{7.39}
$$

<sup>2</sup>The equation here is written in the Itô form. Alternative forms are possible; see details in Ref. [15].

where the additional term d*D(***x***)/*d**x** is the *spurious drift* and is necessary for the correct convergence of the solution to the original Langevin equation with multiplicative noise [14]. Finally, the corresponding finite-difference equation describing a particle in a diffusion gradient is

$$
\mathbf{x}\_{n+1} = \mathbf{x}\_n + \frac{\mathrm{d}D(\mathbf{x}\_n)}{\mathrm{d}\mathbf{x}} \Delta t + \sqrt{2D(\mathbf{x}\_n)\Delta t} \ W\_n. \tag{7.40}
$$
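A single Euler step of Eq. (7.40) can be sketched as follows (force-free case; the diffusion profile `D` and its derivative `dDdx` are user-supplied callables, and all names are ours):

```python
import numpy as np

def step_multiplicative(x, D, dDdx, dt, rng):
    """One Euler step of Eq. (7.40) in the Ito convention: spurious drift
    D'(x)*dt plus multiplicative noise sqrt(2 D(x) dt) W_n."""
    W = rng.standard_normal()           # Gaussian number, zero mean, unit variance
    return x + dDdx(x) * dt + np.sqrt(2.0 * D(x) * dt) * W
```

For a uniform diffusion constant the spurious drift vanishes and the scheme reduces to the ordinary free-diffusion update, with variance 2*D Δt* per step.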

# **7.5 Numerical Examples**

Here we show two examples of collective behaviour emerging in systems of self-propelling particles.

# *7.5.1 Living Crystals*

We have performed a simulation of a system of *N* = 100 active particles including steric interaction and a short-range phoretic attraction, along the lines of the model employed in [85], which was proposed to reproduce the experimentally observed living crystals emerging in a solution of light-activated colloids. In this model, the phoretic attractive force, due to the advective flow generated by the decomposition of hydrogen peroxide on the exposed hematite surface, induces an attractive velocity *vP (r)* for the active particles that scales as the inverse square of the distance between the particles:

$$
v\_P(r) = v\_{P0} \frac{r\_0^2}{r^2}.\tag{7.41}$$

Such a dependence is the expected behaviour for phoretic attraction to a reaction source, and fits well the observed experimental behaviour [85].
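The phoretic drift of Eq. (7.41) can be sketched as follows (a 2D version without periodic boundaries, with a cut-off radius `R_cut` mimicking a finite interaction range; all names are ours):

```python
import numpy as np

def phoretic_drift(pos, v_P0, r0, R_cut):
    """Attractive drift velocities from Eq. (7.41): each pair attracts with
    speed v_P0 * (r0 / r)**2 along the line of centres, cut off beyond R_cut."""
    n = len(pos)
    drift = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r_vec = pos[j] - pos[i]
            r = np.linalg.norm(r_vec)
            if r < R_cut:
                # unit vector towards particle j, scaled by the 1/r^2 law
                drift[i] += v_P0 * (r0 / r) ** 2 * r_vec / r
    return drift
```

At each time step this drift is added to the self-propulsion and noise terms of the active Brownian dynamics; by construction, two particles separated by exactly *r*0 attract each other with speed *vP*0.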

By varying the strength of this attractive interaction, i.e. the magnitude *vP*<sup>0</sup> of the attractive velocity at a given reference distance *r*0, and the self-propulsion velocity *v* of the particles, when the particles cluster we can observe the formation of clusters of different average sizes. Such clusters actively change and rearrange themselves, and their stability also depends on the ratio of the interaction velocity at short distances (comparable with the particle diameter) to the self-propulsion velocity of the single active particle. For a given active velocity *v*, if the attractive reference velocity *vP*<sup>0</sup>, for *r*<sup>0</sup> equal to the particle diameter, is comparable to (or larger than) the self-propulsion velocity *v*, then clusters tend to form. The larger the reference value *vP*<sup>0</sup> of the attractive phoretic interaction, the bigger and more stable the clusters formed.

In Fig. 7.8 we provide two examples of simulations of active Brownian particles interacting via an attractive phoretic force that induces an attractive phoretic velocity given by Eq. (7.41). The radius of the particles is set to *R* = 1*.*0μm, and their speed to *<sup>v</sup>* <sup>=</sup> <sup>2</sup>μm s−1. In both simulations presented, the initial configuration of positions and orientations is randomly generated. We consider a square arena of side *L* = 100μm (concentration *φ*<sup>V</sup> = 0*.*03) with periodic boundary conditions; the concentration of the particles can be adjusted by varying the size of the arena. The time step of the simulation is set to Δ*t* = 0*.*1 s. With these parameters we observe the formation of metastable clusters as in Ref. [85]. By decreasing the strength of the interaction, the clusters become less stable. In Fig. 7.8 a few snapshots of the time evolution are presented.

In the dynamics represented in Fig. 7.8a–c, the reference phoretic velocity *vP*<sup>0</sup> is set to 2μm s−<sup>1</sup> at a distance between colloids of *<sup>r</sup>*<sup>0</sup> <sup>=</sup> <sup>2</sup>μm, and the radius of the

**Fig. 7.8** Living crystals. Simulation of 100 active particles of radius *R* = 1*.*0μm with a phoretic, short-range attractive interaction. The system forms metastable clusters. Depending on the particle concentration, on the noise conditions, and on the strength of the attractive interaction, the clusters may become more or less stable. In the simulation, the noise affecting the position and the orientation is set to the thermal noise felt by a spherical particle at equilibrium in a thermal bath. In (**a**–**c**), the phoretic interaction is set so as to give a phoretic attractive velocity equal to the speed of the active particle at a reference distance of 2*.*0μm between the particles; the clusters formed have sizes of 10–20 particles, are not too tightly bound, and visibly evolve on the time scale of the simulation. In (**d**–**f**), the phoretic interaction is set to give a phoretic attractive velocity equal to the speed of the active particle at a reference distance of 2*.*5μm between the particles; in this case the clusters formed are more compact and, as time passes, tend to merge into bigger clusters

interaction is set to *R*int*,*phor = 10μm [85]. We can observe a tendency to form clusters, but they are no bigger than 20–30 particles each, and a considerable fraction of the particles still move freely. In the dynamics represented in Fig. 7.8d–f, instead, the reference phoretic velocity *vP*<sup>0</sup> is set to 2μm s−<sup>1</sup> at a reference distance between colloids of *r*<sup>0</sup> = 2*.*5μm, i.e., the attractive interaction is stronger. Also in this case the radius of the interaction is set to *R*int*,*phor = 10μm. As the attractive interaction is stronger, the particles now tend to form bigger clusters, and fewer particles move freely and independently.

# *7.5.2 Colloids with Short-Range Aligning Interaction*

We consider a system of active particles interacting via the aligning interaction described in Eq. (7.36), with dynamics described by Eq. (7.37), following [78], where we explicitly take into account the proper value of the friction coefficient of a spherical particle in water when calculating the effect of the torque. We perform a simulation with *N* = 400 particles in a low-noise condition, with colloids of radius *R* = 1*.*0μm, an interaction radius *R*align = 2*.*5μm, a speed *<sup>v</sup>* <sup>=</sup> <sup>0</sup>*.*1μm s−1, and torque strength *<sup>T</sup>*<sup>0</sup> <sup>=</sup> <sup>0</sup>*.*5 radμm2 (Eq. (7.36)). With a simulation time step Δ*t* = 0*.*05 s, we observe that clusters start forming and, because of the low noise level, persist for a long time. In Fig. 7.9 a few snapshots of the time evolution are presented. Under the chosen conditions of noise, interaction range, and interaction strength, we observe that mainly small clusters of two or three particles form. We also observe that some simple configurations, like the ones shown in the circled regions of Fig. 7.9b–c (a cluster of two particles facing each other with opposite orientations, and a cluster of three particles organised in an equilateral triangle with their orientations facing the centre of the triangle), are quite stable in time and do not move. This happens because of

**Fig. 7.9** Short-range aligning interactions. Snapshots of a simulation with an initial random distribution of positions and velocities. Stable clusters of up to five particles form over the time of the simulation. As expected, small clusters of 2 and 3 particles are quite stable in time, given the chosen low-noise condition, as can be seen by comparing the configurations of the clusters highlighted in the circles

the interaction mechanism that tends to preserve the configuration and the relatively low level of noise. Such small stable clusters break apart or transform only when a travelling particle or cluster comes into their proximity and, driven by the aligning interaction, targets them, as shown in the Supplementary Movies of Ref. [78]. By varying the noise conditions, the strength of the interaction, and the range of the interaction, one can reproduce various regimes, from a gaseous phase, with mainly small clusters and single isolated particles, to a cluster phase, with clusters of various sizes, the cluster-size distribution being determined by the chosen noise conditions, interaction strength, and interaction range.

# **7.6 Conclusions**

We have provided a concise, self-contained introduction to the simulation of active Brownian systems using Brownian dynamics and finite-difference equations. We have illustrated how to describe the dynamics of a micron-sized active particle starting from a model spherical particle, and how to account for white thermal noise in translational and rotational motion, and for active displacement, starting from motion in two dimensions. We have briefly explained how to proceed to simulate the full 3D motion, with the proper formalism also for non-spherically symmetric objects, and we have reviewed a few types of interactions that have been used in the description of the dynamics of active systems and that, depending on the choice of the parameters, present a variety of complex behaviours. We have tried to provide an overview that gives hints of up-to-date research on active systems, where simulation is often an important tool to test the understanding of the key mechanisms determining the behaviour of the particular system under investigation. At the same time, we have kept the discussion as simple as possible and given a few numerical examples to provide an effective starting point for students approaching the numerical simulation of Brownian dynamics for the first time.

# **References**


### 7 Numerical Simulations of Active Brownian Particles 235


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 8 Active Fluids Within the Unified Coloured Noise Approximation**

**Umberto Marini Bettolo Marconi, Claudio Maggi, and Alessandro Sarracino**

# **8.1 Introduction**

The aim of this chapter is to provide an overview of some recent advances and open problems in the statistical description of active particles. In particular, we shall illustrate a theoretical approach based on the so-called unified coloured noise approximation (UCNA).

Active matter is composed of systems which are able to convert energy from the environment into directed motion. Every element of an active matter system can be considered out of equilibrium, in contrast to boundary-driven systems, such as those subject to a concentration gradient, which are locally equilibrated [1–3].

Active systems abound in nature, ranging from flocks of birds and the structure-forming cytoskeletons of cells to bacterial colonies, but they can also be man-made in a laboratory using biological building blocks or synthetic components. Being at the crossroads of biology, chemistry and physics, the subject has drawn the attention of scientists from different areas. In this article, we shall discuss active systems whose behaviour is assimilable to that of some bacteria or self-propelled particles and whose constituents are driven by an external random force and constantly spend energy to move through a viscous medium.

U. M. B. Marconi (✉)

Scuola di Scienze e Tecnologie, Università di Camerino, Camerino, Italy e-mail: umberto.marinibettolo@unicam.it

C. Maggi

NANOTEC-CNR, Institute of Nanotechnology, Soft and Living Matter Laboratory, Lecce, Italy

A. Sarracino

CNR-ISC, Dipartimento di Fisica, Sapienza Università di Roma, Roma, Italy Dipartimento di Ingegneria, Università degli Studi della Campania "L. Vanvitelli", Caserta, Italy

© The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9\_8

Run-and-tumble [4, 5] and active Brownian particle (ABP) models [6] were initially proposed to interpret experiments conducted on bacterial suspensions. More recently, the Gaussian coloured noise (GCN) model has gained a lot of attention. It was introduced with the idea of capturing the peculiar aspect of run-and-tumble and ABP models (i.e. the persistence of the trajectories of the active particles) while reducing their mathematical complexity. In the GCN the components of the active force have a Gaussian distribution and are exponentially correlated in time with a characteristic time *τ* . By applying to the GCN model an adiabatic elimination of the fast degrees of freedom one obtains the UCNA [7, 8].

The UCNA [9] has the special property that its configurational steady-state distribution is known, so that many stationary properties can be estimated. Employing this approximation, we present a description of a model of *N* mutually interacting active particles in the presence of external fields and characterise its steady-state behaviour. Within the UCNA, we show that it is possible to develop a statistical mechanical approach similar to the one employed in the study of equilibrium liquids and to obtain the explicit form of the many-particle distribution function by means of the multidimensional unified coloured noise approximation. Such a distribution plays a role analogous to the Gibbs distribution in equilibrium statistical mechanics and provides complete information about the microscopic steady state of the system. From here we develop a method to determine the one- and two-particle distribution functions in the spirit of the Born–Green–Yvon (BGY) equations of equilibrium statistical mechanics [10]. The resulting equations, which contain extra correlations induced by the activity, allow one to determine the stationary density profiles in the presence of external fields, the pair correlations, and the pressure of active fluids. In the low-density regime we obtain the effective pair potential *φ* acting between two isolated particles separated by a distance *r*, showing the existence of an effective attraction. We apply the equations to different problems, ranging from the study of the swim pressure and its relation to the mobility, to the investigation of the stationary state induced by a moving object in a "bath" of active particles.

Before closing this short introduction, we mention that the UCNA method has been applied to the study of the effect of self-propulsion on a mean-field order–disorder transition [11]. Starting from a *φ*<sup>4</sup> scalar field theory subject to an exponentially correlated noise, the UCNA allows us to map the non-equilibrium active dynamics onto an effective equilibrium one. One can study the evolution of the second-order critical point as a function of the noise parameters: the correlation time, *τ* , and the noise strength, *D*. Our results suggest that the universality class of the *φ*<sup>4</sup> equilibrium model remains unchanged.

# *8.1.1 The Genesis of the UCNA Model of Active Particles*

In order to understand the physical motivations of the model we shall discuss, it is necessary to give a brief historical account. In modern times, H.C. Berg was the first to introduce a model describing the motion of bacteria in a viscous medium at small Reynolds number, the so-called run-and-tumble model, where the bacteria swim with constant velocity until a random tumble event suddenly decorrelates the orientation [4]. The active Brownian particle (ABP) model, introduced to make analytical progress, describes particles swimming at fixed speed whose direction rotates by slow angular diffusion. The two models have been shown to possess the same coarse-grained fluctuating hydrodynamics by Cates and Tailleur [12]. An advantage of the ABP is the possibility of taking into account external fields acting on the bacteria, such as obstacles or gravity, as well as interactions among the bacteria. For the *n*-th particle one has

$$
\dot{\mathbf{r}}\_n(t) = v\_0 \mathbf{e}\_n(t) - \frac{\nabla\_n U}{\gamma},\tag{8.1}
$$

where *U* represents the total potential energy of the *N*-particle system, whereas *γ v*0**e***<sup>n</sup>* is the so-called active force, whose modulus is fixed, but whose direction **e***n(t)* changes in time by rotational diffusion according to the law

$$
\dot{\mathbf{e}}\_n(t) = \sqrt{D\_r}\, \boldsymbol{\eta}\_n(t) \times \mathbf{e}\_n(t),\tag{8.2}
$$

where the **η***n(t)* are Gaussian distributed with zero mean and have time correlations ⟨**η***n(t)***η***m(t')*⟩ = 2**I***δmnδ(t* − *t')*, and *Dr* is a rotational diffusion coefficient.

In spite of the great progress achieved using the ABP, the so-called active Ornstein–Uhlenbeck (AOU) or Gaussian coloured noise model has gained great popularity because it has a simpler mathematical structure and lends itself to analytical treatment, due to the Gaussian character of the fluctuations of the active force [13]. The governing equations of this model are very similar to Eq. (8.1):

$$
\dot{\mathbf{r}}\_n(t) = \mathbf{u}\_n(t) - \frac{\nabla\_n U}{\gamma} \tag{8.3}
$$

and

$$\dot{\mathbf{u}}\_{\boldsymbol{n}}(t) = -\frac{1}{\tau}\mathbf{u}\_{\boldsymbol{n}}(t) + \frac{\sqrt{D}}{\tau}\boldsymbol{\eta}\_{\boldsymbol{n}}(t) \tag{8.4}$$

with the difference that the active force *γ v*0**e***n(t)* is replaced by *γ* **u***n(t)*, where the components of **u***n(t)* vary between −∞ and ∞, with

$$
\langle \mathbf{u}\_n(t) \rangle = 0, \quad \langle \mathbf{u}\_n(t) \mathbf{u}\_m(t') \rangle = \delta\_{mn} \mathbf{I} \frac{D}{\tau} e^{-|t - t'|/\tau}. \tag{8.5}
$$

Within the AOUP one can obtain a series of useful results and in some cases solve the equations exactly, as, for instance, in the case of harmonic potentials, where the stationary distribution is known. In the free-particle case *U* = 0 the mean squared displacement per degree of freedom is ⟨*(r(t)*−*r(*0*))*<sup>2</sup>⟩ = 2*D*[*t* − *τ (*1 − *e*−*t/τ )*]. Thus, a free particle moves ballistically with typical speed *<sup>v</sup>* <sup>=</sup> <sup>√</sup>*D/τ* at short times *(t* ≪ *τ )* and diffusively with diffusion constant *D* at long times *(t* ≫ *τ )*.
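As a numerical check of the free-particle behaviour, one can integrate Eqs. (8.3) and (8.4) with *U* = 0 in one dimension and measure the mean squared displacement, which crosses over from ballistic to diffusive around *t* ≈ *τ* (a sketch using the exact Ornstein–Uhlenbeck propagator for u and an Euler update for x; parameter names are ours):

```python
import numpy as np

def free_aoup_msd(D=1.0, tau=1.0, dt=0.01, t_max=1.0, n_traj=5000, seed=1):
    """Integrate the free (U = 0) AOUP in 1D over n_traj trajectories and
    return the mean squared displacement at time t_max."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_max / dt))
    a = np.exp(-dt / tau)                       # exact OU decay per step
    s = np.sqrt((D / tau) * (1.0 - a * a))      # matching noise amplitude
    u = rng.standard_normal(n_traj) * np.sqrt(D / tau)   # stationary start
    x = np.zeros(n_traj)
    for _ in range(n_steps):
        x += u * dt                             # Euler update of position
        u = a * u + s * rng.standard_normal(n_traj)
    return np.mean(x * x)
```

Running this for increasing `t_max` reproduces the crossover: the result grows like *D t²/τ* for *t* ≪ *τ* and like 2*D t* for *t* ≫ *τ*.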

The typical distance travelled by a particle during a ballistic flight is the persistence length L = √*Dτ*. If one observes the system on scales larger than L, its properties will be almost indistinguishable from those of a system subject to standard thermal noise with an effective temperature *T* = *Dγ* .

# **8.2 The Unified Coloured Noise Approximation (UCNA)**

In the following, we consider the evolution equation of the GCN, and from it we shall derive the UCNA equation. For the sake of simplicity, we introduce the vector **x** of components *xi* with index *i* ≡ *(α, n)*, where *α* is the Cartesian component associated with the coordinate of the *n*-th particle. We first differentiate Eq. (8.3) with respect to time and eliminate the active force *γ* **u***n(t)* using Eq. (8.4). The resulting equation has the form of an underdamped Langevin equation

$$
\tau \frac{d^2 x\_i}{dt^2} + \sum\_j \left( \delta\_{ij} + \frac{\tau}{\gamma} \frac{\partial^2 U}{\partial x\_i \partial x\_j} \right) \frac{d x\_j}{dt} = -\frac{1}{\gamma} \frac{\partial U}{\partial x\_i} + \sqrt{D}\, \eta\_i(t), \tag{8.6}
$$

with space dependent friction matrix:

$$
\Gamma\_{ij} = \delta\_{ij} + \frac{\tau}{\gamma} \frac{\partial^2 U}{\partial x\_i \partial x\_j}. \tag{8.7}
$$

Neglecting the acceleration term in Eq. (8.6) we obtain the so-called unified coloured noise approximation, which is analogous to the Kramers-to-Smoluchowski reduction and is exact in the limits *τ* → 0 and *τ* → ∞.

One can derive the UCNA equation by the original method of Hänggi and Jung [7]: on a new time scale *s* = *t τ*<sup>−1/2</sup> one can recast the Langevin equation into the form

$$\frac{d^2 x\_i}{ds^2} + \sum\_j \bar{\Gamma}\_{ij} \frac{d x\_j}{ds} = -\frac{1}{\gamma} \frac{\partial U}{\partial x\_i} + \frac{\sqrt{D}}{\tau^{1/4}} \eta\_i(s),\tag{8.8}$$

with ⟨*ηi(s)ηj (s')*⟩ = 2*δij δ(s* − *s')* and $\bar{\Gamma}\_{ij} = \frac{1}{\tau^{1/2}}\, \delta\_{ij} + \frac{\tau^{1/2}}{\gamma}\, \frac{\partial^2 U}{\partial x\_i \partial x\_j}$. If $\bar{\Gamma}\_{ij}$ is positive definite, the damping is large for both small and large correlation times *τ*, and in both cases one can set $\frac{d^2 x\_i}{ds^2} = 0$ and obtain a Markovian approximation of the coloured noise process of the form

$$\frac{d x\_i}{ds} = -\sum\_j \bar{\Gamma}\_{ij}^{-1} \left( \frac{1}{\gamma} \frac{\partial U}{\partial x\_j} - \frac{\sqrt{D}}{\tau^{1/4}} \eta\_j(s) \right), \tag{8.9}$$

which is to be interpreted in the Stratonovich sense.
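For a single particle in one dimension, the Markovian UCNA dynamics can be sketched as a naive Euler update (note the caveat: Eq. (8.9) is to be read in the Stratonovich sense, which this explicit scheme ignores; it is valid only as a rough illustration for small dt, and all names are ours):

```python
import numpy as np

def ucna_step(x, dU, d2U, gamma, tau, D, dt, rng):
    """One explicit Euler step of the 1D UCNA equation, written back on the
    original time scale: x' = Gamma**-1 * (-U'/gamma + noise), with the
    effective friction Gamma = 1 + (tau/gamma) U''(x)."""
    Gamma = 1.0 + (tau / gamma) * d2U(x)
    drift = -dU(x) / (gamma * Gamma)
    # noise sqrt(D) eta with <eta eta> = 2 delta -> amplitude sqrt(2 D dt)
    noise = np.sqrt(2.0 * D * dt) / Gamma * rng.standard_normal()
    return x + drift * dt + noise
```

For a stiff potential (large *U''*) the effective friction Γ grows, so the UCNA particle slows down near potential minima; this is the mechanism behind the effective attraction discussed later in the chapter.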

# *8.2.1 Kinetic Approach*

It is straightforward to write the equation of evolution for the *N*-particle probability distribution of positions **x** associated with the overdamped limit $\frac{d^2 x\_i}{dt^2} = 0$. It reads

$$\frac{\partial P\_N(\mathbf{x},t)}{\partial t} = \sum\_{ij} \frac{\partial}{\partial x\_i} \Gamma\_{ij}^{-1}(\mathbf{x}) \left( D \sum\_k \frac{\partial}{\partial x\_k} \Gamma\_{jk}^{-1}(\mathbf{x}) + \frac{1}{\gamma} \frac{\partial U}{\partial x\_j} \right) P\_N(\mathbf{x},t). \tag{8.10}$$

It is, however, instructive to derive such an equation from a kinetic argument. We consider Eq. (8.6), define the velocity variable *vi* = ˙*xi*, and write the following Kramers equation (stochastically equivalent to Eq. (8.6)) describing the evolution of the phase-space distribution of the *N* particles, *ΦN (***x***,* **v***,t)*:

$$\frac{\partial \Phi\_N}{\partial t} + \sum\_i v\_i \frac{\partial \Phi\_N}{\partial x\_i} - \sum\_i \frac{1}{\gamma \tau} \frac{\partial U}{\partial x\_i} \frac{\partial \Phi\_N}{\partial v\_i} = \frac{1}{\tau} \sum\_i \frac{\partial}{\partial v\_i} \left( \frac{D}{\tau} \frac{\partial}{\partial v\_i} + \sum\_k \Gamma\_{ik} v\_k \right) \Phi\_N. \tag{8.11}$$

This kind of equation occurs in the study of colloidal solutions and is treated by multiple-time-scale methods. In general, we cannot solve Eq. (8.11), which involves both the velocity and the position variables. However, we can attack the problem by assuming that the velocity degrees of freedom evolve much faster than the positional degrees of freedom. An assumption of this type is made when one reduces the Kramers phase-space equation to the Smoluchowski configurational equation. We therefore make the ansatz that the phase-space distribution factorises into a spatial part and a velocity part, constructing a trial phase-space distribution of the factorised form:

$$\Phi\_N(\mathbf{x}, \mathbf{v}, t) = \Pi(\mathbf{v}|\mathbf{x}, t) P\_N(\mathbf{x}, t), \tag{8.12}$$

where *Π(***v**|**x***,t)* is the conditional velocity distribution when the particle positions are fixed at **x**, and *PN (***x***,t)* is the marginal distribution giving the positions of the particles regardless of their velocities:

$$P\_N(\mathbf{x}, t) = \int d\mathbf{v} \, \Phi\_N(\mathbf{x}, \mathbf{v}, t). \tag{8.13}$$

In order to determine *PN* we integrate Eq. (8.11) over all velocities and obtain the continuity equation relating the probability density *PN* and the probability current *Ji*:

$$\frac{\partial P\_N(\mathbf{x},t)}{\partial t} + \sum\_{l} \frac{\partial J\_l(\mathbf{x},t)}{\partial \mathbf{x}\_l} = \mathbf{0},\tag{8.14}$$

where the current $J_l(\mathbf{x},t)$ is the $l$-th component of the $dN$-dimensional vector:

$$J\_l(\mathbf{x}, t) = \int d\mathbf{v} \, v\_l \, \Phi\_N(\mathbf{x}, \mathbf{v}, t). \tag{8.15}$$

After multiplying Eq. (8.11) by $v_l$ and integrating over the $dN$ velocities, we obtain the momentum balance equations

$$\frac{\partial J\_l(\mathbf{x},t)}{\partial t} + \sum\_k \frac{\partial p\_{lk}(\mathbf{x},t)}{\partial \mathbf{x}\_k} + \frac{1}{\chi \tau} \frac{\partial U}{\partial \mathbf{x}\_l} P\_N(\mathbf{x},t) = -\frac{1}{\tau} \sum\_k \Gamma\_{lk}(\mathbf{x},t) J\_k(\mathbf{x},t), \tag{8.16}$$

where $p_{lk}(\mathbf{x},t) \equiv \int d\mathbf{v}\, v_l v_k \, \Phi_N(\mathbf{x},\mathbf{v},t)$. Eqs. (8.14) and (8.16) form a system of $Nd + 1$ equations which is not closed, because the explicit form of the tensor $p_{lk}(\mathbf{x},t)$ in terms of $P_N$ and $J_l$ is not known. A simplifying ansatz is to assume that the velocities have a local distribution similar to the one they would have in an equilibrium system, namely the following multivariate Gaussian distribution:

$$\Pi(\mathbf{v}|\mathbf{x}) \approx \left(\frac{\tau}{2\pi D}\right)^{N/2} \sqrt{\det \Gamma} \, \exp\left(-\frac{\tau}{2D} \sum\_{ij} v\_i \Gamma\_{ij}(\mathbf{x}) v\_j\right). \tag{8.17}$$
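
The Gaussian ansatz of Eq. (8.17) prescribes, at fixed positions, zero-mean velocities with covariance $(D/\tau)\,\Gamma^{-1}(\mathbf{x})$. A minimal numerical sketch of this statement in 1$d$ (the pair potential $w(x) = x^{-12}$, the configuration and all parameter values below are illustrative choices, not taken from the text):

```python
import numpy as np

def gamma_matrix(x, tau, gamma, w2):
    """Gamma = 1 + (tau/gamma) * Hessian of U, for a 1d pairwise
    potential U = sum_{i>j} w(x_i - x_j) with second derivative w2."""
    n = len(x)
    G = np.eye(n)
    for i in range(n):
        for j in range(i):
            h = (tau / gamma) * w2(x[i] - x[j])
            G[i, i] += h
            G[j, j] += h
            G[i, j] -= h
            G[j, i] -= h
    return G

rng = np.random.default_rng(0)
D, tau, gamma = 1.0, 0.5, 1.0
x = np.arange(5) * 2.0 + rng.normal(0.0, 0.1, 5)   # a fixed, well-separated configuration
w2 = lambda r: 156.0 * abs(r) ** -14               # w(r) = r^-12  =>  w''(r) = 156 r^-14
G = gamma_matrix(x, tau, gamma, w2)
cov = (D / tau) * np.linalg.inv(G)                 # velocity covariance of Eq. (8.17)
v = rng.multivariate_normal(np.zeros(len(x)), cov, size=200_000)
print(np.allclose(v.T @ v / len(v), cov, atol=0.05))
```

The sample covariance of the drawn velocities reproduces $(D/\tau)\Gamma^{-1}$, including the position dependence entering through the Hessian of the potential.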

It is important to notice that, in contrast with equilibrium systems, the variance depends on the positions of the particles; this is consistent with the fact that the friction is position dependent. Within the Gaussian ansatz we can rewrite the balance Eq. (8.16) as

$$\frac{\partial J\_l}{\partial t} + \frac{D}{\tau} \sum\_k \frac{\partial}{\partial \mathbf{x}\_k} \Gamma\_{lk}^{-1} P\_N + \frac{1}{\gamma\tau} \frac{\partial U}{\partial \mathbf{x}\_l} P\_N = -\frac{1}{\tau} \sum\_k \Gamma\_{lk} J\_k. \tag{8.18}$$

Finally, we assume that the current relaxes on a time scale much faster than that of the density, so that, dropping the time derivative in Eq. (8.18) and expressing $J_l$ in terms of $P_N$ in Eq. (8.14), we obtain Eq. (8.10):

$$\frac{\partial P\_N(\mathbf{x},t)}{\partial t} = D \sum\_{ij} \frac{\partial}{\partial x\_i} \Gamma\_{ij}^{-1}(\mathbf{x},t) \left( \sum\_k \frac{\partial}{\partial x\_k} \Gamma\_{jk}^{-1}(\mathbf{x},t) + \frac{1}{D\gamma} \frac{\partial}{\partial x\_j} U(\mathbf{x}) \right) P\_N(\mathbf{x},t)$$

$$= -\sum\_i \frac{\partial}{\partial x\_i} J\_i(\mathbf{x},t). \tag{8.19}$$

The advantage of such a derivation is that we have obtained not only the distribution function of positions, but also the approximate form of the distribution of the velocities of the particles. The latter is peculiar because it depends on the positions of all the particles at variance with the equilibrium case.

# *8.2.2 Stationary Solution in the Absence of Current*

Let us consider the configurational distribution function $P_N(\mathbf{x})$ in the steady state associated with Eq. (8.10). There are two ways to realise a steady state: either the divergence of the probability flux vanishes, $\sum_i \partial J_i/\partial x_i = 0$, or the flow $\mathbf{J}$ itself vanishes. Since only the configurational space is considered in such a reduced description, and the positional variables $x_i$ are even under time-reversal transformation, the condition $J_i = 0$ for arbitrary $i$ is equivalent to the detailed balance condition [14]. In detail, if the matrix $\Gamma^{-1}$ is non-singular, $J_i = 0$ implies

$$D\sum\_{k} \frac{\partial}{\partial \mathbf{x}\_k} \Gamma\_{lk}^{-1}(\mathbf{x}) P\_N(\mathbf{x}) + \frac{1}{\gamma} \frac{\partial U}{\partial \mathbf{x}\_l} P\_N(\mathbf{x}) = 0,\tag{8.20}$$

which can be rewritten as:

$$\frac{\partial}{\partial \mathbf{x}\_l} P\_N(\mathbf{x}) = -\frac{1}{D\gamma} \sum\_k \Gamma\_{lk}(\mathbf{x}) \frac{\partial U(\mathbf{x})}{\partial \mathbf{x}\_k} P\_N(\mathbf{x}) + \left[\frac{\partial}{\partial \mathbf{x}\_l} \ln \det \Gamma(\mathbf{x})\right] P\_N(\mathbf{x}). \tag{8.21}$$

Detailed balance is a stronger condition than mere stationarity of the distribution, since it implies that there is no net flow of probability around any closed cycle of states. Such a situation no longer holds when we consider the phase-space probability density of the original GCN problem, as discussed in Appendix 2.

From the above equation, one can find an exact expression for the probability density, which reads

$$\begin{split} P\_N(\mathbf{x}) &= \frac{1}{Z\_N} \exp\left\{ -\frac{1}{D\gamma} \left[ U(\mathbf{x}) + \frac{\tau}{2\gamma} \sum\_{k}^N \left( \frac{\partial U(\mathbf{x})}{\partial x\_k} \right)^2 \right] \right. \\ &\left. + \ln \left| \det \left( \delta\_{lk} + \frac{\tau}{\gamma} \frac{\partial^2 U(\mathbf{x})}{\partial x\_l \partial x\_k} \right) \right| \right\}, \end{split} \tag{8.22}$$

where *ZN* is a normalisation constant.
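
In one dimension and for a single particle, the statement that Eq. (8.22) solves the zero-current condition Eq. (8.20) can be verified symbolically. A short SymPy sketch (a verification aid, not part of the original derivation):

```python
import sympy as sp

x = sp.symbols('x')
D, tau, gamma = sp.symbols('D tau gamma', positive=True)
U = sp.Function('U')(x)

Gamma = 1 + (tau / gamma) * sp.diff(U, x, 2)          # Eq. (8.26) with N = 1
# Eq. (8.22) in 1d, normalisation constant Z_N omitted:
P = sp.exp(-(U + tau / (2 * gamma) * sp.diff(U, x)**2) / (D * gamma)) * Gamma
# zero-current condition, Eq. (8.20): D d/dx (Gamma^{-1} P) + U'(x) P / gamma = 0
residual = D * sp.diff(P / Gamma, x) + sp.diff(U, x) * P / gamma
print(sp.simplify(residual / P))   # -> 0
```

The residual vanishes identically for an arbitrary potential $U(x)$, confirming the detailed balance property of the UCNA steady state.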

Such a formula, in principle, fully describes within the unified colour approximation the steady state distribution of a system of interacting particles subject to coloured noise. In the white-noise limit $\tau \to 0$ the formula reduces to the Boltzmann distribution corresponding to the potential $U$. For finite values of $\tau$, instead, the distribution maintains a Boltzmann-like form, but with the effective potential given by Eq. (8.22). The additional terms $\left(\partial U(\mathbf{x})/\partial x_k\right)^2$ and $\ln \det \Gamma$ have repercussions on the form of the steady state configuration. Such a form of the distribution is at first glance surprising: in equilibrium systems energy is exchanged reversibly with the environment and the form of $P_N(\mathbf{x})$ is determined by the potential and by the temperature of the environment, whereas in non-equilibrium systems energy is exchanged irreversibly with the environment and in general there is no one-to-one correspondence between the potential and $P_N(\mathbf{x})$. The vanishing of all components of the probability current (see Eq. (8.20)) is tantamount to the existence of the detailed balance condition, i.e. of microscopic reversibility in the dynamics of the active system. This is reflected in the Boltzmann-like form of the distribution function. One may ask whether this is an artefact of the UCNA treatment of the dynamics or a genuine property of the system. As we shall discuss below, by considering an elementary case, the detailed balance condition is violated by the original GCN dynamics by terms proportional to the persistence time, $\tau$.

For a total potential $U$ consisting of the sum of purely repulsive pair potentials, the overall result is to create a sort of effective attractive potential among the particles. The origin of such an attraction can be understood as follows: the drag force on each particle is determined by the bare friction with the solvent medium plus an additional contribution stemming from the interactions. The non-equilibrium force is an attraction between self-propelled particles causing them to cluster. In the case where $J_i \neq 0$ but $\sum_i \partial J_i/\partial x_i = 0$, it is not possible in general to obtain explicit solutions, apart from some special cases which will be discussed later.

# *8.2.3 Fox Approximation*

The approximate treatment obtained by applying the UCNA method is not unique. An alternative method has been put forward by Fox [15] who employed functional calculus in order to derive the effective equation for the distribution function *P (***x***,t)* corresponding to the GCN model. The resulting equation of evolution is valid in the small *τ* regime and has been applied to active fluids by Farage et al. [16]. It reads

$$\begin{split} \frac{\partial P\_N^{\text{fox}}(\mathbf{x},t)}{\partial t} &= D \sum\_{i} \frac{\partial}{\partial \mathbf{x}\_i} \Big( \sum\_{k} \frac{\partial}{\partial \mathbf{x}\_k} \Gamma\_{ik}^{-1}(\mathbf{x},t) P\_N^{\text{fox}}(\mathbf{x},t) \\ &+ \frac{1}{D\gamma} \frac{\partial}{\partial \mathbf{x}\_i} U(\mathbf{x}) P\_N^{\text{fox}}(\mathbf{x},t) \Big). \end{split} \tag{8.23}$$

Interestingly, the Fox and the UCNA approaches in the case of a single coloured noise yield the same steady state distribution function, whereas the approach to such a solution is different in the two cases. In the case where the particles are subject to different types of noises, each characterised by its own relaxation time, the UCNA approximation does not give the correct equation of motion even in the small *τ* limit, whereas the Fox method correctly reproduces such a limit. Therefore, in order to describe mixtures of active particles or of passive and active particles it is convenient to apply Fox's approach in spite of the fact that it only describes the small *τ* regime [17, 18].

# *8.2.4 Entropy Production in UCNA*

The detailed balance requires that the probability of making a transition forward in time equals the probability of making the reverse transition, backward in time, when the system is in the steady state. It is easy to verify that within the UCNA approximation the condition of detailed balance holds if the probability current vanishes, $\mathbf{J} = 0$, in the steady state. The vanishing of $\mathbf{J}$ implies the existence of an effective potential $U_{\rm eff}$ which fully determines the distribution; this is the reason why the UCNA steady state distribution has a form similar to a Boltzmann distribution, although with an effective potential which depends on the persistence time. We shall see in Sect. 8.7 that this is not the case when $\mathbf{J} \neq 0$.

A measure of the distance from thermodynamic equilibrium is provided by the entropy production, so that it is interesting to study such a quantity in the steady state of the UCNA evolution equation. To this purpose, let us consider the rate of change of the Shannon entropy (for the sake of simplicity we study the case with *N* = 1 of Eq. (8.10)).

$$\begin{split} \dot{S}(t) &= -\int d\mathbf{x} \frac{\partial}{\partial t} P(\mathbf{x}, t) \ln P(\mathbf{x}, t) \\ &= \int d\mathbf{x} \ln P(\mathbf{x}, t) \,\nabla J(\mathbf{x}, t) = -\int d\mathbf{x} \frac{\nabla P}{P} J, \end{split} \tag{8.24}$$

where we obtained the last equality by partial integration. We decompose $\dot{S}$ into two contributions:

$$
\dot{\mathcal{S}} = \dot{\mathcal{S}}\_s + \dot{\mathcal{S}}\_m,
$$

where $\dot{S}_s$ is the entropy production due to irreversible processes occurring inside the system and $\dot{S}_m$ is the entropy flux from the environment to the system. We shall show that $\dot{S}_s$ is positive definite, whereas $\dot{S}_m$ can have either sign. In the steady state the rate of change of the entropy vanishes, so that $\dot{S}_m = -\dot{S}_s$. From Eq. (8.19) for $N = 1$ we have

$$\frac{\partial P(\mathbf{x},t)}{\partial t} = \nabla \left[ \frac{D}{\Gamma(\mathbf{x},t)} \left( \nabla \frac{P(\mathbf{x},t)}{\Gamma(\mathbf{x},t)} + \frac{1}{D\gamma} \nabla U(\mathbf{x}) P(\mathbf{x},t) \right) \right] \tag{8.25}$$

with the following probability current:

$$J(\mathbf{x}, t) = -\frac{D}{\Gamma(\mathbf{x}, t)} \left( \nabla \frac{P(\mathbf{x}, t)}{\Gamma(\mathbf{x}, t)} + \frac{1}{D\gamma} \nabla U P(\mathbf{x}, t) \right),$$

where

$$
\Gamma(\mathbf{x}) = 1 + \frac{\tau}{\gamma} \nabla^2 U(\mathbf{x}). \tag{8.26}
$$

We can eliminate ∇*P* from Eq. (8.24) and obtain the following expression in terms of the current:

$$\dot{S}(t) = \int d\mathbf{x} \frac{\left(\Gamma(\mathbf{x}, t)J(\mathbf{x}, t)\right)^2}{DP(\mathbf{x}, t)} + \int d\mathbf{x} J(\mathbf{x}, t) \frac{\nabla U\_{\text{eff}}(\mathbf{x}, t)}{D\gamma},\tag{8.27}$$

with

$$U\_{\rm eff}(\mathbf{x}) = U + \frac{\tau}{2\gamma} (\nabla U)^2 - D\gamma \ln \Gamma.$$

We now identify the first term in Eq. (8.27)

$$\dot{S}\_s(t) = \frac{1}{D} \int d\mathbf{x} \frac{\Gamma^2(\mathbf{x}, t) J^2(\mathbf{x}, t)}{P(\mathbf{x}, t)} \tag{8.28}$$

as an entropy production rate, which is always non-negative, and the second term

$$\dot{S}\_m(t) = \frac{1}{T} \int d\mathbf{x} J(\mathbf{x}, t) \,\nabla U\_{\text{eff}}(\mathbf{x}, t) \tag{8.29}$$

as the entropy flux due to heat exchanges between the system and the surroundings, where $T = D\gamma$ plays the role of the temperature.

We identify the heat flux with the average change of the effective potential energy, $U_{\rm eff}$, of the system per unit time, evaluated as follows:

$$\begin{split} \langle \dot{Q}(t) \rangle &= \frac{d}{dt} \int d\mathbf{x} \, U\_{\text{eff}}(\mathbf{x}) P(\mathbf{x}, t) \\ &= \int d\mathbf{x} \, U\_{\text{eff}}(\mathbf{x}) \dot{P}(\mathbf{x}, t) = -\int d\mathbf{x} \, U\_{\text{eff}}(\mathbf{x}) \nabla J(\mathbf{x}, t) .\end{split} \tag{8.30}$$

After an integration by parts we obtain

$$
\langle \dot{Q}(t) \rangle = \int d\mathbf{x} \, J(\mathbf{x}, t) \, \nabla U\_{\mathrm{eff}}(\mathbf{x}), \tag{8.31}
$$

and by comparing Eqs. (8.31) and (8.29), we find the following relation:

$$
\dot{S}\_m(t) = \frac{1}{T} \langle \dot{Q}(t) \rangle. \tag{8.32}
$$

Finally, we have

$$\dot{S}(t) = \frac{1}{T} \langle \dot{Q}(t) \rangle + \frac{1}{D} \int d\mathbf{x} \frac{\Gamma^2(\mathbf{x}, t) J^2(\mathbf{x}, t)}{P(\mathbf{x}, t)}. \tag{8.33}$$

Notice that now the temperature entering the formula connecting $\dot{S}_m$ and $\dot{Q}$ is uniform and given by $T$.

Let us remark that, due to the detailed balance condition, both $\dot{S}_s$ and $\dot{S}_m$ vanish in the UCNA steady state, showing that the UCNA method maps the underlying GCN non-equilibrium description onto an equilibrium one. At variance with the UCNA, in the GCN both $\dot{S}_s$ and $\dot{S}_m$ are non-vanishing in the steady state.

# *8.2.5 H-Theorem*

The following calculation proves the approach to the stationary distribution in terms of the entropy functional. One sees immediately that the entropy flux can be written as

$$\dot{S}\_m(t) = -\int d\mathbf{x} J(\mathbf{x}, t) \,\nabla \ln P\_{\text{steady}}(\mathbf{x}),\tag{8.34}$$

so that using Eq. (8.24) we can rewrite

$$\begin{split} \dot{S}\_s(t) = \dot{S}(t) - \dot{S}\_m(t) &= -\int dx J(\mathbf{x}, t) \, \nabla \ln \frac{P(\mathbf{x}, t)}{P\_{\text{steady}}(\mathbf{x})} \\ &= -\int dx \ln \frac{P(\mathbf{x}, t)}{P\_{\text{steady}}(\mathbf{x})} \frac{\partial P(\mathbf{x}, t)}{\partial t} .\end{split} \tag{8.35}$$

The quantity $\dot{S}_s$ is nothing else but the rate of change of the Kullback–Leibler entropy, $S_{KL} \equiv -\int d\mathbf{x}\, P(\mathbf{x},t) \ln\left(P(\mathbf{x},t)/P_{\text{steady}}(\mathbf{x})\right)$, whose rate of change is non-negative, due to the sign of $\dot{S}_s$, and vanishes at equilibrium:

$$
\dot{S}\_{KL}(t) = \dot{S}\_{\text{s}}(t) \ge 0. \tag{8.36}
$$

Thus the Kullback–Leibler entropy of the UCNA process is an ever increasing function and satisfies an H-theorem. The relative entropy *SKL(t)* is a functional of the non-equilibrium probability distribution and generalises the ordinary thermodynamic entropy which is defined for equilibrium states.
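
The H-theorem can be illustrated numerically by integrating the single-particle UCNA equation, Eq. (8.25), on a grid and monitoring $S_{KL}$. The sketch below (harmonic potential, explicit finite differences; all numerical parameters are arbitrary choices) shows $S_{KL}$ increasing monotonically towards its equilibrium value zero:

```python
import numpy as np

# 1d UCNA Fokker-Planck, Eq. (8.25), with U = x^2/2 on a finite grid
D, tau, gamma = 1.0, 0.2, 1.0
x = np.linspace(-4.0, 4.0, 400)
dx = x[1] - x[0]
U, Up, Upp = 0.5 * x**2, x, np.ones_like(x)
Gam = 1.0 + (tau / gamma) * Upp

# steady state from Eq. (8.22) with N = 1
Ps = np.exp(-(U + tau / (2 * gamma) * Up**2) / (D * gamma)) * Gam
Ps /= Ps.sum() * dx

def step(P, dt):
    # probability current of Eq. (8.25), then continuity: dP/dt = -dJ/dx
    J = -(D / Gam) * (np.gradient(P / Gam, dx) + Up * P / (D * gamma))
    return P - dt * np.gradient(J, dx)

def S_KL(P):
    m = P > 1e-300
    return -np.sum(P[m] * np.log(P[m] / Ps[m])) * dx

P = np.exp(-(x - 1.5)**2 / 0.5)        # off-centre initial condition
P /= P.sum() * dx
vals = [S_KL(P)]
for k in range(1, 4001):
    P = step(P, dt=1e-4)
    if k % 500 == 0:
        vals.append(S_KL(P))
print(all(b >= a - 1e-6 for a, b in zip(vals, vals[1:])))  # S_KL never decreases
```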

# **8.3 Born–Green–Yvon Hierarchy in the Steady State**

We go back now to the multidimensional case and adopt indices to specify components and particles. We focus attention on the steady state properties of the system as described by the UCNA. We must remark that the formula Eq. (8.22) refers to $N$ particles and therefore is not of practical use when the particles are mutually interacting. We need to derive from it expressions for the one-body and two-body distribution functions. The procedure is similar to the one employed in equilibrium statistical mechanics. We shall use the steady state condition, Eq. (8.20), to derive a set of equations similar to the BGY hierarchy for distribution functions in equilibrium systems. The hierarchy becomes of practical utility in conjunction with a suitable truncation scheme in order to eliminate the dependence on the higher order correlations.

In the following, the Cartesian components (from 1 to $d$) are identified by the indices $\alpha$ and $\beta$, and the particles are identified by Latin indices. The total potential is assumed to be the sum of the mutual pairwise interactions $w(\mathbf{r} - \mathbf{r}')$ between the particles and of the potential exerted by the external field $u(\mathbf{r})$: $U(\mathbf{r}_1, \ldots, \mathbf{r}_N) = \sum_{i>j}^N w(\mathbf{r}_i - \mathbf{r}_j) + \sum_i^N u(\mathbf{r}_i)$.

The hierarchy follows from Eq. (8.10) by considering the reduced probability distribution functions of order $n$:

$$P\_N^{(n)}(\mathbf{r}\_1, \mathbf{r}\_2, \dots, \mathbf{r}\_n) \equiv \int d\mathbf{r}\_{n+1} \dots d\mathbf{r}\_N P\_N(\mathbf{r}\_1, \mathbf{r}\_2, \dots, \mathbf{r}\_N). \tag{8.37}$$

When we integrate Eq. (8.10) over $(N-2)$ coordinates we obtain an equation which relates the two-body marginal distribution $P_N^{(2)}(\mathbf{r}_1, \mathbf{r}_2)$ to marginal distributions of different orders

$$\begin{split} & -T \int \int d\mathbf{r}\_{3} \dots d\mathbf{r}\_{N} \sum\_{\beta} \sum\_{n} \frac{\partial}{\partial r\_{\beta n}} [\Gamma^{-1}\_{\alpha 1, \beta n}(\mathbf{r}\_{1}, \dots, \mathbf{r}\_{N}) P\_{N}(\mathbf{r}\_{1}, \dots, \mathbf{r}\_{N})] \\ & = P\_{N}^{(2)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}) \Big( \frac{\partial u(\mathbf{r}\_{1})}{\partial r\_{\alpha 1}} + \frac{\partial w(\mathbf{r}\_{1} - \mathbf{r}\_{2})}{\partial r\_{\alpha 1}} \Big) \\ & + \sum\_{k > 2} \int d\mathbf{r}\_{k} P\_{N}^{(3)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}, \mathbf{r}\_{k}) \frac{\partial w(\mathbf{r}\_{1} - \mathbf{r}\_{k})}{\partial r\_{\alpha 1}}. \end{split} \tag{8.38}$$

The equation for $P_N^{(2)}$ has a structure similar to that of a standard equilibrium gas but for the term containing $\Gamma^{-1}_{\alpha 1,\beta n}$, and unless we introduce approximations it is of little practical use.

We write

$$\Gamma\_{\alpha i, \beta k} = \left(\delta\_{\alpha \beta} + \frac{\tau}{\gamma} u\_{\alpha \beta}(\mathbf{r}\_i) + \frac{\tau}{\gamma} \sum\_{j \neq i} w\_{\alpha \beta}(\mathbf{r}\_i - \mathbf{r}\_j)\right) \delta\_{ik} - \frac{\tau}{\gamma} w\_{\alpha \beta}(\mathbf{r}\_i - \mathbf{r}\_k)(1 - \delta\_{ik}),\tag{8.39}$$

and we remark that in the limit of small $\tau/\gamma$ the matrix $\Gamma^{-1}_{\alpha l, \beta k}$ can be approximated as [19]:

$$
\Gamma^{-1}\_{\alpha l, \beta k} \approx \left( \delta\_{\alpha \beta} - \frac{\tau}{\gamma} u\_{\alpha \beta}(\mathbf{r}\_l) - \frac{\tau}{\gamma} \sum\_{j \neq l} w\_{\alpha \beta}(\mathbf{r}\_l - \mathbf{r}\_j) \right) \delta\_{lk} = \tilde{\Gamma}^{-1}\_{\alpha \beta}(\mathbf{r}\_l) \delta\_{lk}, \qquad (8.40)
$$

where $u_{\alpha\beta} \equiv \partial^2 u(\mathbf{r})/\partial r_\alpha \partial r_\beta$ and $w_{\alpha\beta} \equiv \partial^2 w(\mathbf{r})/\partial r_\alpha \partial r_\beta$. We substitute this approximation and recast Eq. (8.38) in terms of the $n$-th order density distributions $\rho^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n) = \frac{N!}{(N-n)!} P_N^{(n)}(\mathbf{r}_1, \ldots, \mathbf{r}_n)$ and find

$$\begin{split} \quad &T \sum\_{\beta} \frac{\partial}{\partial r\_{\beta 1}} \Big[ \rho^{(2)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}) \delta\_{\alpha\beta} - \frac{\tau}{\gamma} \Big( \rho^{(2)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}) u\_{\alpha\beta}(\mathbf{r}\_{1}) + \rho^{(2)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}) w\_{\alpha\beta}(\mathbf{r}\_{1} - \mathbf{r}\_{2}) \\ &+ \int d\mathbf{r}\_{k} \rho^{(3)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}, \mathbf{r}\_{k}) w\_{\alpha\beta}(\mathbf{r}\_{1} - \mathbf{r}\_{k}) \Big) \Big] \\ &= -\rho^{(2)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}) \Big( \frac{\partial u(\mathbf{r}\_{1})}{\partial r\_{\alpha 1}} + \frac{\partial w(\mathbf{r}\_{1} - \mathbf{r}\_{2})}{\partial r\_{\alpha 1}} \Big) - \int d\mathbf{r}\_{k} \rho^{(3)}(\mathbf{r}\_{1}, \mathbf{r}\_{2}, \mathbf{r}\_{k}) \frac{\partial w(\mathbf{r}\_{1} - \mathbf{r}\_{k})}{\partial r\_{\alpha 1}}, \end{split} \tag{8.41}$$

which represents the BGY equation for the pair density distribution *ρ(*2*)* . By integrating also over the coordinate 2 we find the BGY equation for the one-body density:

$$\begin{split} &T \sum\_{\beta} \frac{\partial}{\partial r\_{\beta 1}} \Big[ \delta\_{\alpha\beta} \rho^{(1)}(\mathbf{r}\_1) - \frac{\tau}{\gamma} \rho^{(1)}(\mathbf{r}\_1) u\_{\alpha\beta}(\mathbf{r}\_1) \\ & \qquad \qquad \qquad \qquad - \frac{\tau}{\gamma} \int d\mathbf{r}\_2 \rho^{(2)}(\mathbf{r}\_1, \mathbf{r}\_2) \, w\_{\alpha\beta}(\mathbf{r}\_1 - \mathbf{r}\_2) \Big] \\ &= -\rho^{(1)}(\mathbf{r}\_1) \frac{\partial u(\mathbf{r}\_1)}{\partial r\_{\alpha 1}} - \int d\mathbf{r}\_2 \rho^{(2)}(\mathbf{r}\_1, \mathbf{r}\_2) \frac{\partial w(\mathbf{r}\_1 - \mathbf{r}\_2)}{\partial r\_{\alpha 1}}, \end{split} \tag{8.42}$$

that in the limit of *τ* → 0 is just the BGY equation for the single-particle distribution function.

The r.h.s. of Eq. (8.42) contains the coupling to the external field and the so-called direct interaction among the particles, whereas the l.h.s., besides the ideal gas term, contains a term proportional to the activity parameter.

# **8.4 Active Pressure**

A natural way to define the pressure in a system of active spherical particles driven by coloured noise is by using the virial theorem which relates the virial of the external forces confining the particles in a given volume to the pressure exerted on the walls by the particles themselves. The forces exerted by the bounding walls of the container are macroscopically described as external pressure [20, 21]. Each oriented area element *d***S** exerts a force −*p(***r***)d***S** so that

$$\sum\_{l}^{N} \langle \mathbf{F}\_{l}^{\rm ext} \cdot \mathbf{r}\_{l} \rangle = - \oint p(\mathbf{r}) \mathbf{r} \cdot d\mathbf{S} = -\bar{p}Vd,\tag{8.43}$$

where *p*¯ is the average pressure over the boundary surface, **r** is the position vector of the surface element and the last equality follows from the divergence theorem (∇ · **r** = *d*).

Now, in order to evaluate the external force virial, Eq. (8.43), we multiply Eq. (8.38) by *rα*1, integrate over **r**<sup>1</sup> and **r**<sup>2</sup> and sum over indices. After an integration by parts we obtain the following equation:

$$\sum\_{l}^{N} \langle (\mathbf{F}\_{l}^{\mathrm{ext}} + \mathbf{F}\_{l}^{\mathrm{int}}) \cdot \mathbf{r}\_{l} \rangle + T \sum\_{\alpha l} \langle \Gamma\_{\alpha l, \alpha l}^{-1} \rangle = 0,\tag{8.44}$$

where the forces are separated into two parts, wall and interparticle forces, $F^{\rm ext}_{\alpha i} = -\partial u(\mathbf{r}_i)/\partial r_{\alpha i}$ and $F^{\rm int}_{\alpha i} = -\sum_k \partial w(\mathbf{r}_i - \mathbf{r}_k)/\partial r_{\alpha i}$, respectively, and the symbol $\langle \cdot \rangle$ stands for an average over the stationary distribution $P_N$.

In the case where the confining vessel has constant curvature one finds: *p*¯ = *p(***r***)*. In general, when the linear size of the vessel is much larger than the persistence length, the standard virial definition of pressure based on the assumption of the constancy of the pressure on the boundary of the system is correct [20, 21]. In order to obtain a closed expression for the pressure, we write the term stemming from the internal forces as:

$$\sum\_{i}^{N} \langle \mathbf{F}\_{i}^{\mathrm{int}} \cdot \mathbf{r}\_{i} \rangle = \frac{1}{2} \sum\_{i} \sum\_{j}^{\prime} \langle \mathbf{F}\_{ij} \cdot (\mathbf{r}\_{i} - \mathbf{r}\_{j}) \rangle,\tag{8.45}$$

and approximate the average of the trace of $\Gamma^{-1}$ as:

$$T\sum\_{i}^{N}\sum\_{\alpha}^{d}\langle\Gamma\_{\alpha i,\alpha i}^{-1}\rangle \approx T\sum\_{\alpha}^{d}\int d\mathbf{r}\,\tilde{\Gamma}\_{\alpha\alpha}^{-1}(\mathbf{r})\rho^{(1)}(\mathbf{r}),\tag{8.46}$$

where we have used the approximation of Eq. (8.40). We now write

$$\begin{split} p &= \frac{T}{dV} \sum\_{\alpha}^{d} \int d\mathbf{r} \,\tilde{\Gamma}\_{\alpha \alpha}^{-1}(\mathbf{r}) \rho^{(1)}(\mathbf{r}) \\ &- \frac{1}{2dV} \sum\_{\alpha}^{d} \int d\mathbf{r} \, \int d\mathbf{r}' (r\_{\alpha} - r\_{\alpha}') \rho^{(2)}(\mathbf{r}, \mathbf{r}') \frac{\partial w(\mathbf{r} - \mathbf{r}')}{\partial r\_{\alpha}}, \end{split} \tag{8.47}$$

where the second term in Eq. (8.47) is analogous to the direct contribution to the pressure in passive fluids stemming from interactions. Finally, we obtain the explicit representation

$$\begin{split} \frac{T}{dV} \sum\_{\alpha}^{d} \int d\mathbf{r} \,\tilde{\Gamma}^{-1}\_{\alpha\alpha}(\mathbf{r}) \rho^{(1)}(\mathbf{r}) &\approx \frac{T}{V} \Big[ N - \frac{1}{d} \frac{\tau}{\gamma} \int d\mathbf{r} \sum\_{\alpha} u\_{\alpha\alpha}(\mathbf{r}) \rho^{(1)}(\mathbf{r}) \\ & - \frac{1}{d} \frac{\tau}{\gamma} \int d\mathbf{r} \int d\mathbf{r}' \rho^{(2)}(\mathbf{r}, \mathbf{r}') \sum\_{\alpha} w\_{\alpha\alpha}(\mathbf{r} - \mathbf{r}') \Big]. \end{split} \tag{8.48}$$

The first term on the r.h.s. of Eq. (8.48) represents an ideal gas-like contribution to the pressure, $TN/V$, also referred to as the swim pressure, due to the rotational degrees of freedom. The second and third terms in Eq. (8.48) represent indirect interaction contributions, and take into account the slowing down of active fluids near a boundary and in regions of high density, respectively. The indirect interaction pressure involves the interplay between the rotational degrees of freedom and the interparticle forces and is a non-equilibrium effect. In fact, in the limit $\tau \to 0$ the quantity in Eq. (8.48) reduces to $TN/V$, the ideal gas contribution to the pressure.

Besides the virial method, for the UCNA there exist two other approaches to evaluate not only the pressure but also the surface tension. In the first approach, these can be identified with the volume and area derivatives, respectively, of the partition function associated with the stationary non-equilibrium distribution. The second alternative method is a mechanical approach and is related to the work necessary to deform the system. The pressure is obtained by comparing the expression of the work in terms of local stress and strain with the corresponding expression in terms of the microscopic distribution. This work is determined from the force balance encoded in the Born–Green–Yvon equation and can be used to obtain a formula for the local pressure tensor and the surface tension even in inhomogeneous situations. Nicely, the three procedures lead to the same values of the pressure, and lend support to the idea that the UCNA partition function is more than a formal property of the system: it determines the stationary non-equilibrium thermodynamics of the model. For further details the reader may consult ref. [9].

# **8.5 Velocity Correlations**

The kinetic derivation of the UCNA has shown that a system of active particles displays velocity correlations. Within the present treatment these correlations have been approximated by means of a Gaussian multivariate distribution whose variance depends on the potential.

We consider $N$ interacting particles in 1$d$. We perform numerical simulations of systems of $N = 1000$ GCN-driven particles interacting via the potential $\phi(\mathbf{x}) = \sum_{i>j} (x_i - x_j)^{-12}$, for several values of the density $\rho = N/L$, of $D$ and of $\tau$.
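
A minimal Langevin sketch of such a simulation (with a smaller $N$, a nearest-neighbour truncation of the $x^{-12}$ repulsion and plain Euler–Maruyama stepping, all simplifying choices made here rather than specifications from the text; the coloured noise is integrated as an Ornstein–Uhlenbeck process of variance $D/\tau$ and correlation time $\tau$):

```python
import numpy as np

rng = np.random.default_rng(1)
N, Lbox = 64, 64.0                       # density rho = N/L = 1
D, tau, gamma, dt = 1.0, 1.0, 1.0, 1e-4

x = (np.arange(N) + 0.5) * (Lbox / N)    # start from a lattice (avoids overlaps)
eta = rng.normal(0.0, np.sqrt(D / tau), N)

def force(x):
    """-dphi/dx_i for phi = sum_{i>j} (x_i - x_j)^-12, truncated to
    nearest neighbours on the line, with periodic boundaries."""
    d = np.diff(x, append=x[:1] + Lbox)  # gap to the right neighbour
    f = 12.0 * d ** -13                  # magnitude of the pair repulsion
    return np.roll(f, 1) - f             # pushed right by the left pair, left by the right

v2, nsteps, nburn = 0.0, 100_000, 20_000
for k in range(nsteps):
    v = force(x) / gamma + eta           # particle velocities, v_i = dx_i/dt
    x += v * dt
    eta += -eta * dt / tau + np.sqrt(2.0 * D * dt) / tau * rng.normal(size=N)
    if k >= nburn:
        v2 += np.mean(v ** 2)
v2 /= nsteps - nburn
print(v2 / (D / tau))   # < 1: interactions suppress the velocity variance
```

Note that in the stationary state $\langle \dot{x}^2 \rangle = D/\tau - \langle F^2 \rangle/\gamma^2$, so the measured ratio always lies below one, in line with the suppression shown in Fig. 8.1.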

The velocity variance depends on the configuration of the particles, so that by averaging it over the positions we obtain the overall velocity variance of a GCN-driven system:

$$
\langle v^2 \rangle = \frac{1}{dN} \int d\mathbf{x} \, P\_N(\mathbf{x}) \int d\mathbf{v} \, \mathbf{v} \cdot \mathbf{v} \, \Pi(\mathbf{v}|\mathbf{x}).\tag{8.49}
$$

The $\langle v^2 \rangle$ computed numerically via Eq. (8.49) is plotted in Fig. 8.1b (full lines) as a function of the 1$d$ density $\rho = N/L$ of the system, for several values of $D$ (at fixed $\tau$). In all these simulations we compute the variance $\langle \dot{x}^2 \rangle$ and report the results in Fig. 8.1 as connected symbols.

To test the validity of the Gaussian ansatz for the velocity distribution given by Eq. (8.17), we compute the average over positions in Eq. (8.49) directly from the coordinates obtained numerically, instead of using the theoretical $P_N$ of Eq. (8.22). This is plotted in Fig. 8.1 as dashed lines and follows well the numerical curves, although some expected deviation [7] is observed upon increasing $D$ to very high values. If we assume a uniform density and long-ranged interactions (mean-field approximation), the velocity distribution Eq. (8.17) simplifies substantially, since all the off-diagonal terms of $\partial_\alpha \partial_\beta w(\mathbf{r}_i - \mathbf{r}_j)$ are of order one and can be neglected with respect to the terms on the main diagonal, which are of order $N$ [19]. This yields

**Fig. 8.1** Normalised velocity variance for a 1*d* system of many interacting active particles. Symbols are the results of numerical simulations for several values of *τ* and *D* (see legend). Dashed lines are the theoretical velocity variances obtained by averaging Eq. (8.49) over the coordinates obtained numerically. Thick lines are the result of a homogeneous density approximation. Dash-dotted lines represent the small-*τ* approximation connecting the variance to the pair distribution function. Dotted lines are the velocity variances obtained by mapping the system onto a harmonic model (see ref. [22])

the density-dependent variance:

$$\frac{\langle v^2(\rho) \rangle}{(D/\tau)} = \frac{1}{1 + \frac{\tau}{\gamma} w\_2 \rho} = \frac{1}{1 + \rho \mathcal{L}},\tag{8.50}$$

where $w_2 = \int_\sigma^\infty dx\, w''(x)$ is the mean potential curvature integrated from the diameter $\sigma$, and $\mathcal{L} = \sqrt{D\tau}$ is the characteristic length of the active motion. In the last equality of Eq. (8.50) we have used the fact that, for a generic repulsive potential, $\sigma$ corresponds roughly to the distance where the interaction force balances the self-propulsion force (i.e. $|w'(\sigma)| \approx \gamma v = \gamma\sqrt{D/\tau}$), so that $w_2 = w'(\infty) - w'(\sigma) \approx \gamma\sqrt{D/\tau}$. This is plotted in Fig. 8.1 as a thick line for the largest $D$ and follows the data well when $\mathcal{L}$ is large. To first order in $\tau$ we obtain the results plotted as dash-dotted lines in Fig. 8.1, and the theory compares well with the numerical simulations. However, by fixing $\tau$ and increasing $D$ this approximation deviates strongly from the simulations. We can now derive an expression for the pressure by a kinetic argument, identifying it with

$$p = \rho \langle v^2(\rho) \rangle = \frac{D}{\tau} \frac{\rho}{1 + \rho \mathcal{L}}.\tag{8.51}$$
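
Equations (8.50) and (8.51) are straightforward to tabulate. A short sketch (parameter values arbitrary) exhibiting the two limits, the ideal active-gas law $p \simeq (D/\tau)\rho$ at low density and the saturation $p \to \sqrt{D/\tau^3}$ for $\rho\mathcal{L} \gg 1$:

```python
import numpy as np

def v2_mean_field(rho, D, tau):
    """Normalised velocity variance, Eq. (8.50): <v^2>/(D/tau) = 1/(1 + rho*L)."""
    return 1.0 / (1.0 + rho * np.sqrt(D * tau))

def pressure_kinetic(rho, D, tau):
    """Kinetic pressure, Eq. (8.51): p = rho * <v^2(rho)>."""
    return rho * (D / tau) * v2_mean_field(rho, D, tau)

D, tau = 1.0, 1.0
for rho in (1e-4, 1.0, 1e4):
    print(rho, pressure_kinetic(rho, D, tau))
# dilute limit: p ~ (D/tau) rho; dense limit: p -> sqrt(D/tau**3)
```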

# **8.6 Simple Applications**

# *8.6.1 Active Elastic Dumbbells*

Let us consider $N$ mutually noninteracting elastic dumbbells, i.e. two point particles bound together by an elastic spring of constant $\alpha^2$, moving in a vessel represented by a weak harmonic confining potential of spring constant $\omega^2$ [23, 24]. Such a model, similar to the harmonic trap model [25–27], was proposed long ago by Riddell and Uhlenbeck. It contains the minimal ingredients to observe the competition between internal forces and confining potential and can be solved without introducing further approximations. The potential energy reads [28]:

$$U(\mathbf{r}\_1, \mathbf{r}\_2) = w(\mathbf{r}\_1 - \mathbf{r}\_2) + u(\mathbf{r}\_1) + u(\mathbf{r}\_2)$$

with $w(\mathbf{r}) = \frac{1}{2}\alpha^2 \mathbf{r}^2$. By setting $u(\mathbf{r}) = \frac{k}{2}\frac{\mathbf{r}^2}{L^2}$, one introduces a volume dependence in the spring constant associated with the confining potential; for simplicity of notation we shall use $\omega^2 = k/L^2$.

The virial pressure is obtained by applying the general formula Eq. (8.47)

$$pdL^d = -\langle \mathbf{F}\_1^{\text{ext}} \cdot \mathbf{r}\_1 + \mathbf{F}\_2^{\text{ext}} \cdot \mathbf{r}\_2 \rangle = dT \langle \Gamma\_{11}^{-1} + \Gamma\_{22}^{-1} \rangle + \langle \mathbf{F}\_{12} \cdot (\mathbf{r}\_1 - \mathbf{r}\_2) \rangle. \tag{8.52}$$

By simple algebraic manipulations we find

$$p = \frac{T}{L^d} \left[ \frac{1}{1 + \frac{\varepsilon}{\gamma} \omega^2} + \frac{\omega^2}{\omega^2 + 2\alpha^2} \frac{1}{(1 + \frac{\varepsilon}{\gamma}(\omega^2 + 2\alpha^2))} \right]. \tag{8.53}$$

In the limit of *τ* → 0 and *α* → 0 the pressure reduces to the expected ideal gas pressure of a system of 2*N* noninteracting particles in a vessel of volume *L<sup>d</sup>* . On the other hand, one can see that the pressure decreases with increasing values of *τ* , i.e. if the persistence length <sup>L</sup> <sup>=</sup> <sup>√</sup>*Dτ* exceeds the typical size of the vessel the particles do not explore the whole space of the vessel, but remain localised at the bottom.

# *8.6.2 Pressure of N Noninteracting Active Particles Surrounded by Harshly Repulsive Walls*

As a second example we consider an assembly of *N* noninteracting active particles constrained in a region of space near the origin by a spherically symmetric external potential in three dimensions. Using Eq. (8.42) one can derive the following exact formula expressing the mechanical balance condition:

$$\frac{d}{dr}p\_N(r) + \frac{2}{r}(p\_N(r) - p\_T(r)) = -\rho^{(\text{l})}(r)u'(r),\tag{8.54}$$

where the components of the pressure tensor normal (N) and tangential (T) to the walls are

### 8 Active Fluids Within the Unified Coloured Noise Approximation 257

$$p\_N(r) = T\_s \frac{\rho^{(\rm l)}(r)}{1 + \frac{r}{\gamma}u''(r)},\tag{8.55}$$

and

$$p\_T(r) = T\_s \frac{\rho^{(l)}(r)}{1 + \frac{r}{\chi} \frac{u'(r)}{r}}.\tag{8.56}$$

The density profile according to Eq. (8.22) can be written explicitly as:

$$\rho^{(1)}(r) = \rho\_0 \exp\left[-\frac{u(r)}{T\_s} - \frac{\tau}{2\gamma T\_s}(u'(r))^2\right] \left(1 + \frac{\tau}{\gamma}u''(r)\right) \left(1 + \frac{\tau}{\gamma}\frac{u'(r)}{r}\right)^2,\tag{8.57}$$

so that we can fully determine the components of the pressure tensor.

# **8.7 Active Particles in a Time-Dependent Potential**

The results presented in the previous sections concern static cases, where the external potential is constant in time. In this section we address the interesting issue of a time-dependent external potential [29]. In particular, we shall consider a shifting potential *U (x, t)* = *U (x* − *ct)* in one dimension, moving at constant speed *c* and inducing a stationary current in the system. The potential barrier interacts with a fluid of active particle, see the sketch depicted in Fig. 8.2. The effect of a moving potential on a particle fluid is a general problem in modelling the motion of a driven obstacle in a medium, in several different fields, such as in the active microrheology of colloidal systems or in the translocation dynamics of polymer chains through nanopores.

**Fig. 8.2** Sketch of the system: a potential barrier moves at constant velocity *c* in a channel with a noninteracting active particle fluid, producing a density profile which is non-uniform along the *x* direction [29]

In the case of the GCN model, we will show that the coupling of self-propulsion (namely a finite persistence time *τ* ) with the stationary current gives place to an effective dynamical potential, which vanishes in both the limits of *c* → 0, and *τ* → 0 (passive particles). The main physical effect we observe in this model, which is accounted for by a generalised UCNA scheme, consists in a much enhanced accumulation of active particles at the interface fluid/obstacle, with respect to the static case or with respect to the behaviour of a passive particle fluid.

# *8.7.1 Effective Potential*

As shown in Sect. 8.3, the first step in deriving UCNA equations is to take the time derivative in Eq. (8.3). For the sake of simplicity, we consider a one-dimensional system and a shifting time-dependent potential of the form *U (x, t)* = *U (x* − *ct)*. We have

$$
\dot{\mathbf{x}}(t) = \mathbf{v}(t),\tag{8.58}
$$

$$\dot{v}(t) = -\frac{1}{\tau}\Gamma(\mathbf{x} - ct)v(t) + \frac{1}{\tau\chi}F^\*(\mathbf{x} - ct) + \frac{\sqrt{D}}{\tau}\eta(t),\tag{8.59}$$

$$F^\*(\mathbf{x}) = F(\mathbf{x}) - \tau c \frac{dF(\mathbf{x})}{d\mathbf{x}} = -\frac{dU(\mathbf{x})}{d\mathbf{x}} + \tau c \frac{d^2U(\mathbf{x})}{d\mathbf{x}^2},\tag{8.60}$$

$$
\Gamma(\mathbf{x}) = 1 + \frac{\pi}{\nu} \frac{d^2 U(\mathbf{x})}{dx^2}. \tag{8.61}
$$

By comparing with Eq. (8.6) we note that a new term appears in Eqs. (8.61): an effective force *F*∗*(x)*, which reduces to −*dU/dx* when *c* = 0. As we shall show, this additional contribution in the force term due to the finite velocity of the obstacle *c >* 0 is responsible for new dynamical effects.

# *8.7.2 Dynamical UCNA and Particle Density Profile*

In order to show how these effects can be described within a generalised UCNA scheme, it is useful to consider the associated Fokker–Planck equation. It is time saving to adopt non-dimensional variables for positions, velocities and time, and rescale forces accordingly. We define *vT* <sup>=</sup> <sup>√</sup>*D/τ* , measure lengths using the characteristic length, , of the potential, and introduce the following nondimensional variables:

$$
\bar{t} \equiv t \frac{\upsilon\_T}{\ell}, \quad \bar{v} \equiv \frac{v}{\upsilon\_T}, \quad \bar{x} \equiv \frac{x}{\ell}, \quad \bar{F}(\bar{x}, \bar{t}) \equiv \frac{\ell F(x, t)}{D\chi},
$$

#### 8 Active Fluids Within the Unified Coloured Noise Approximation 259

$$
\zeta = \frac{\ell}{\tau \upsilon\_T}, \quad \bar{\Phi} = \upsilon\_T \ell \,\,\Phi, \quad \bar{c} = \frac{c}{\upsilon\_T},
$$

where *ζ* plays the role of a non-dimensional friction. To lighten the notation we shall drop the bar over the non-dimensional variables without incurring in ambiguities. For the probability distribution of position and velocity *(y, v)* we thus obtain

$$\frac{\partial}{\partial t}\Phi(\mathbf{y},\upsilon) + v\frac{\partial}{\partial \mathbf{y}}\Phi(\mathbf{y},\upsilon) + F^\*(\mathbf{y})\frac{\partial}{\partial \upsilon}\Phi(\mathbf{y},\upsilon) = \xi \frac{\partial}{\partial \upsilon} \left[\frac{\partial}{\partial \upsilon} + \Gamma(\mathbf{y})\upsilon\right] \Phi(\mathbf{y},\upsilon),\tag{8.63}$$

where we have introduced the shifted variable *y* = *x* − *ct*. We, now, look for an approximate solution to this equation. We start by eliminating the *v* dependence of the phase-space distribution *(y, v)*, by multiplying by powers of *v* and integrating w.r.t. *v*. Thus, one obtains a set of coupled first order ordinary differential equations, the so-called Brinkman hierarchy, whose first two members are the continuity equation and the momentum balance equation, respectively:

$$-c\frac{d\rho(\mathbf{y})}{d\mathbf{y}} + \frac{dJ(\mathbf{y})}{d\mathbf{y}} = 0,\tag{8.64}$$

$$-c\frac{dJ(\mathbf{y})}{d\mathbf{y}} + \frac{d\Pi(\mathbf{y})}{d\mathbf{y}} - F^\*(\mathbf{y})\rho(\mathbf{y}) + \xi\Gamma(\mathbf{y})J(\mathbf{y}) = 0. \tag{8.65}$$

Here we have introduced the density *ρ(y)*, the current *J (y)* and the momentum current *(y)*, defined as:

$$
\rho(\mathbf{y}) = \int dv \Phi(\mathbf{y}, v),
\tag{8.66}
$$

$$J(\mathbf{y}) = \int dv v \Phi(\mathbf{y}, v),\tag{8.67}$$

$$
\Pi(\mathbf{y}) = \int dv v^2 \Phi(\mathbf{y}, v). \tag{8.68}
$$

According to the continuity Eq. (8.64) the current must be a linear function of the density, yielding

$$J(\mathbf{y}) = c[\rho(\mathbf{y}) - \bar{\rho}],\tag{8.69}$$

where *ρ*¯ is a constant such that the solution is periodic at *ρ(L)* = *ρ(*−*L)*, where 2*L* is the system size. As we shall see later, for large systems *L l*, *ρ*¯ ≈ *ρ(*±*L)* and the current is almost vanishing at the boundaries.

From the analysis of the case of static potentials, discussed in the previous sections, we know that the solution of Eq. (8.63) in regions where *F*∗*(y)* = 0 and *(y)* = 1 can be written as (see Eq. (8.17)):

(8.62)

$$\Phi(\mathbf{y}, \upsilon) = \left[\rho(\mathbf{y}) - \tilde{\rho}\right] H\_0(\upsilon - c) + \tilde{\rho} \, H\_0(\upsilon), \tag{8.70}$$

where

$$H\_0(v) = \sqrt{\frac{1}{2\pi}} \exp\left(-\frac{1}{2}v^2\right) \tag{8.71}$$

is a Hermite function of zero order. Indeed, by substituting the form Eq. (8.70) in Eq. (8.63) (with *F*<sup>∗</sup> = 0), we obtain a solution provided *ρ(y)* satisfies the following condition:

$$\frac{d\rho(\mathbf{y})}{d\mathbf{y}} = -\zeta c[\rho(\mathbf{y}) - \bar{\rho}].\tag{8.72}$$

Next, we insist in looking for a solution of Eq. (8.63) even in the region where *F*∗*(y)* = 0 of the form:

$$\Phi(\mathbf{y}, \upsilon) = \left[\rho(\mathbf{y}) - \tilde{\rho}\right] H\_0(\mathbf{y}, \upsilon - c) + \tilde{\rho} \, H\_0(\mathbf{y}, \upsilon), \tag{8.73}$$

where we have introduced the following (non-uniform) Hermite function, which is position dependent through the trial function *β(y)*:

$$H\_0(\mathbf{y}, \upsilon) = \sqrt{\frac{\beta(\mathbf{y})}{2\pi}} \exp\left(-\frac{\beta(\mathbf{y})}{2}\upsilon^2\right). \tag{8.74}$$

Substituting now the trial distribution Eq. (8.73) into Eq. (8.63), we get

$$\begin{aligned} &H\_{1}(\mathbf{y},v-c)\frac{1}{\sqrt{\beta(\mathbf{y})}}\Big[\rho'(\mathbf{y})-\beta(\mathbf{y})[F^{\*}(\mathbf{y})-\xi\Gamma(\mathbf{y})c][\rho(\mathbf{y})-\bar{\rho}] \\ &-\frac{\beta'(\mathbf{y})}{\beta(\mathbf{y})}[\rho(\mathbf{y})-\bar{\rho}]\Big]-H\_{1}(\mathbf{y},v)\frac{1}{\sqrt{\beta(\mathbf{y})}}\Bigg[\beta(\mathbf{y})F^{\*}(\mathbf{y})\bar{\rho}+\frac{\beta'(\mathbf{y})}{\beta(\mathbf{y})}\bar{\rho}\Big] \\ &+\xi[\Gamma(\mathbf{y})-\beta(\mathbf{y})][(\rho(\mathbf{y})-\bar{\rho})H\_{2}(\mathbf{y},v-c)+\bar{\rho}H\_{2}(\mathbf{y},v)] \\ &-\frac{\beta'(\mathbf{y})}{2\sqrt{\beta^{3}(\mathbf{y})}}\Big[(\rho(\mathbf{y})-\bar{\rho})H\_{3}(\mathbf{y},v-c)+\bar{\rho}H\_{3}(\mathbf{y},v)-c\beta^{1/2}H\_{2}(\mathbf{y},v)\Big]=0,\end{aligned} \tag{8.75}$$

where prime denotes derivative w.r.t. *y*, and *H*1*(y, v)*, *H*2*(y, v)* and *H*3*(y, v)* are the Hermite functions of order 1, 2 and 3, respectively, defined by the recursion relation:

$$H\_{\boldsymbol{\upsilon}+1}(\mathbf{y}, \boldsymbol{\upsilon}) = -\frac{1}{\sqrt{\beta(\mathbf{y})}} \frac{\partial H\_{\boldsymbol{\upsilon}}(\mathbf{y}, \boldsymbol{\upsilon})}{\partial \boldsymbol{\upsilon}}.$$

The trial solution fails to solve Eq. (8.63). However, if we limit ourselves to consider only the two lowest moments of the probability distribution, i.e. if after multiplying by *(v* − *c)*, we integrate Eq. (8.75) over *v*, we obtain the following condition which gives the equation for the density profile:

$$\frac{1}{\beta(\mathbf{y})} \frac{d\rho(\mathbf{y})}{d\mathbf{y}} - [F(\mathbf{y}) - \xi c] \rho - \frac{\beta'(\mathbf{y})}{\beta^2(\mathbf{y})} \rho - \xi c \Gamma(\mathbf{y}) \bar{\rho} = 0. \tag{8.76}$$

If we continue the projection procedure beyond the first order in *(v* − *c)* there will be an error in the equation for the second moment, which becomes inconsistent with the value of the second moment imposed by the trial distribution (which, in fact, is already fixed by the trial form and therefore does not contain enough parameters to satisfy the extra conditions.)

The ansatz for the phase-space distribution gives the following expression for the momentum flux:

$$
\Pi(\mathbf{y}) = \frac{\rho(\mathbf{y})}{\beta(\mathbf{y})} + c^2[\rho(\mathbf{y}) - \bar{\rho}].\tag{8.77}
$$

Note that Eq. (8.76) is perfectly equivalent to Eq. (8.65) when the latter is endowed with a closure, indeed represented by Eq. (8.77). The static UCNA approximation is recovered by setting the arbitrary function *β(y)* = *(y)* and *c* = 0, (i.e. *J* = 0).

In order to deal with possible zeroes of the function *β(y)* let us solve the nonlinear differential equation for the profile using the auxiliary function:

$$m(\mathbf{y}) = \frac{\rho(\mathbf{y})}{\beta(\mathbf{y})} \tag{8.78}$$

that satisfies the equation

$$\frac{dn(\mathbf{y})}{d\mathbf{y}} = \left[F(\mathbf{y}) - \xi c\right] \beta(\mathbf{y}) n(\mathbf{y}) + \xi c \Gamma(\mathbf{y}) \bar{\rho}.\tag{8.79}$$

Then, defining the effective potential

$$w(\mathbf{y}) = \int\_{-L}^{\mathbf{y}} ds \beta(s) \frac{dU(s)}{ds} + \xi c \int\_{-L}^{\mathbf{y}} ds [\beta(s) - 1],$$

allows us to rearrange Eq. (8.79) as follows:

$$\frac{dn(\mathbf{y})}{d\mathbf{y}} = \left[ -\frac{d}{d\mathbf{y}} w(\mathbf{y}) - \xi c \right] n(\mathbf{y}) + \xi c \Gamma(\mathbf{y}) \bar{\rho}.\tag{8.80}$$

The solution of the inhomogeneous equation is then

$$m(\mathbf{y}) = Ae^{-w(\mathbf{y}) - \xi c\mathbf{y}} + \xi c\bar{\rho}e^{-w(\mathbf{y}) - \xi c\mathbf{y}} \int\_{-L}^{\mathbf{y}} ds e^{w(\mathbf{s}) + \xi c\mathbf{s}} \Gamma(\mathbf{s}),\tag{8.81}$$

$$A = n(-L)e^{w(-L) - \\\\\zeta cL}.\tag{8.82}$$

By construction *w(*−*L)* = 0 and one may verify that *n(L)* = *n(*−*L)*, but *w(L)* = *w(*−*L)*. Eventually, one has

$$n(\mathbf{y}) = n(L)e^{-[w(\mathbf{y}) - w(-L)] - c\xi(\mathbf{y} + L)}$$

$$\times \left\{ 1 + [e^{2\xi cL}e^{w(L) - w(-L)} - 1] \frac{\int\_{-L}^{\mathcal{Y}} ds \, e^{w(\mathbf{s}) + c\xi s} \Gamma(\mathbf{s})}{\int\_{-L}^{L} ds \, e^{w(\mathbf{s}) + c\xi s} \Gamma(\mathbf{s})} \right\},\tag{8.83}$$

and, from Eq. (8.78), the density profile reads

$$\begin{split} \rho(\mathbf{y}) &= \frac{\rho(L)}{\beta(L)} \beta(\mathbf{y}) e^{-[w(\mathbf{y}) - w(-L)] - c\xi(\mathbf{y} + L)} \\ &\times \left\{ 1 + [e^{2\xi cL} e^{w(L) - w(-L)} - 1] \frac{\int\_{-L}^{\mathbf{y}} ds e^{w(\mathbf{s}) + c\xi s} \Gamma(\mathbf{s})}{\int\_{-L}^{L} ds e^{w(\mathbf{s}) + c\xi s} \Gamma(\mathbf{s})} \right\}, \end{split} \tag{8.84}$$

where *ρ(L)* is fixed by the normalisation. The explicit expression for the density *ρ*¯ is

$$
\bar{\rho} = \frac{1}{\zeta c} \frac{\rho(L)}{\beta(L)} \frac{e^{w(L) + c\xi L} - e^{w(-L) - c\xi L}}{\int\_{-L}^{L} dy e^{w(\mathfrak{y}) + c\xi \circ} \Gamma(\mathfrak{y})}.
$$

We empirically set *β(y)* = *(y)* in the regions where *(y)* ≥ 0, and *β(y)* = 0 otherwise. Then, the expression (8.84) can be evaluated numerically and in Fig. 8.3 we compare the analytical prediction with numerical simulations, in the case of the following external potential:

$$U(\mathbf{y}) = U\_0 \left[ \tanh\left(\frac{\mathbf{y} + 1}{\xi}\right) - \tanh\left(\frac{\mathbf{y} - 1}{\xi}\right) \right],\tag{8.85}$$

characterised by the steepness 1*/ξ* .

# *8.7.3 Average Drag Force*

Our analytical approach allows us to obtain an estimate for the average drag force exerted by the active fluid on the moving wall, defined as:

**Fig. 8.3** Density profiles for the static case *c* = 0 (left) and for the moving potential with *c* = 0*.*2 (right). Red lines represent analytical predictions, while black dots are numerical simulations. Other parameters are *U*<sup>0</sup> = 0*.*5*, ξ* = 0*.*1*, ζ* = 2 [29]

**Fig. 8.4** Left: Comparison between the analytical predictions (dotted lines) and the numerical simulations (symbols) for the average drag force exerted by the active fluid on the moving wall. Right: Comparison between the maximal drag force computed in the active and passive case [29]

$$
\langle F \rangle = \int\_{-L}^{L} d\mathbf{y} \, F(\mathbf{y}) \rho(\mathbf{y}). \tag{8.86}
$$

The comparison of the analytical prediction with numerical simulations for the drag force is shown in Fig. 8.4. Note the non-monotonic behaviour of the force–velocity relation is characterised by a maximum value of the force *F*max for a particular velocity *c*∗.

In order to highlight the new physical effects arising due to the coupling of selfpropulsion and a stationary current, it is useful to compare the behaviour observed in the active particle model with the one obtained in the case of a moving potential in a (passive) thermal bath. In the latter situation, the noise term acting in the stochastic equation for the particle velocity is a delta-correlated noise of amplitude 2*/ζ* . In the right panel of Fig. 8.4 we show the maximum value of the drag force *F*max as a function of 1*/(ξ ζ )* in both models. The qualitative difference between the two behaviours relies on the observation that in the active case the average drag force can increase indefinitely by reducing the parameter *ξ* , which characterises the steepness of the travelling potential.

# **8.8 Conclusions**

In this chapter, we have reviewed the recent developments of the theory of active particles driven by coloured noise within an approximate scheme, the UCNA. Such a method has the great advantage of providing predictions and equations much simpler with respect to other methods. The reason is that the adiabatic approximation at the basis of the method eliminates the faster degrees of freedom, the velocity in the case of the GCN and of the ABP.

# **Appendix 1: Entropy Production and Heat Flux in the GCN**

Let us consider the elementary case of a single active particle in one dimension driven by Gaussian coloured noise, with phase-space distribution function *p(x, v, t)*. We derive the equations for the entropy production and entropy flux in phase-space *(x, v)* and we shall use small letters to distinguish probabilities and thermodynamic variables from the configurational variables of UCNA. We start from the Fokker–Planck equation

$$\frac{\partial p}{\partial t} + v \frac{\partial p}{\partial x} - \frac{1}{\chi \tau} \frac{\partial U}{\partial x} \frac{\partial p}{\partial v} = \frac{1}{\tau} \frac{\partial}{\partial v} \left( \frac{D}{\tau} \frac{\partial}{\partial v} + \Gamma v \right) p. \tag{8.87}$$

Within the GCN, we consider the time derivative of the total Shannon entropy production defined as

$$\begin{split} \dot{s}(t) &= -\iint d\mathbf{x}dv \frac{\partial}{\partial t} p(\mathbf{x}, v, t) \ln p(\mathbf{x}, v, t) \\ &= \iint d\mathbf{x}dv \ln p(\mathbf{x}, v, t) \operatorname{div} \mathbf{I} = \iint d\mathbf{x}dv \operatorname{div} \left( \frac{\mathbf{I}}{p} \right) p, \end{split} \tag{8.88}$$

where the *(x, v)* components of the current vector **I** are:

$$I\_{\mathbf{x}} = vp(\mathbf{x}, v, t) \tag{8.89}$$

$$I\_v = -\frac{1}{\chi\tau} \frac{\partial U}{\partial \chi} p - \frac{\Gamma}{\tau} vp - \frac{D}{\tau^2} \frac{\partial p}{\partial v} \tag{8.90}$$

Let us define the temperature *T* = *D/τ* and the local temperature *θ (x)* = *T / (x)*. Now, the total time derivative of the entropy Eq. (8.88) can be written as the sum, *s(t)* ˙ = ˙*ss(t)* + ˙*sm(t)* (see Sect. 2.4). Explicitly, we find after integrating by parts Eq. (8.88) the following expressions:

$$\dot{s}\_s(t) = \frac{1}{\pi} \frac{1}{T} \iint d\mathbf{x} d\upsilon \frac{1}{p} \left( \Gamma(\mathbf{x}) \upsilon p + T \frac{\partial p}{\partial \upsilon} \right)^2 \tag{8.91}$$

and the entropy flux

$$\dot{s}\_{\mathfrak{m}}(t) = -\iint d\mathbf{x}dv \frac{1}{\theta(\mathbf{x})} \frac{\Gamma(\mathbf{x})}{\tau} \Big[ v^2 p(\mathbf{x}, v, t) + \theta(\mathbf{x}) v \frac{\partial}{\partial v} p(\mathbf{x}, v, t) \Big]. \tag{8.92}$$

The dimensional form of the total energy *(x, v)* and of the heat flux are, respectively:

$$
\epsilon(t) = \frac{1}{2}v^2 + \frac{U(\mathbf{x})}{\mathbf{r}\chi},\tag{8.93}
$$

and

$$
\langle \dot{q}(t) \rangle = \iint d\mathbf{x} dv \epsilon(t) \frac{\partial}{\partial t} p(\mathbf{x}, v, t) \tag{8.94}
$$

$$
\langle \dot{q} \rangle = -\iint dx dv \frac{\Gamma(\mathbf{x})}{\tau} \Big[ v^2 p(\mathbf{x}, v, t) + \theta(\mathbf{x}) v \frac{\partial}{\partial v} p(\mathbf{x}, v, t) \Big]. \tag{8.95}
$$

It is suggestive to rewrite

$$
\dot{q}(t) = \int d\mathbf{x} \,\dot{\tilde{q}}(\mathbf{x}, t), \tag{8.96}
$$

$$\dot{s}\_m(t) = \int d\mathbf{x} \frac{1}{\theta(\mathbf{x})} \dot{\tilde{q}}(\mathbf{x}, t),\tag{8.97}$$

with a local density of heat flux defined as

$$\begin{split} \dot{\tilde{q}}(\mathbf{x},t) &= -\frac{\Gamma(\mathbf{x})}{\tau} \int dv \Big[ v^2 p(\mathbf{x},v,t) + \theta(\mathbf{x}) v \frac{\partial}{\partial v} p(\mathbf{x},v,t) \Big] \\ &= -\frac{1}{\tau} \frac{T}{\theta(\mathbf{x})} n(\mathbf{x},t) \Big[ \langle v^2 \rangle\_{\mathbf{x}} - \theta(\mathbf{x}) \Big], \end{split} \tag{8.98}$$

where *n(x, t)* <sup>=</sup> *dvp(x, v, t)* and *n(x, t)v*2*<sup>x</sup>* <sup>=</sup> *dvv*2*p(x, v, t)* , with *v*2*<sup>x</sup>* the mean squared velocity at given position. Expression Eq. (8.97) represents an interesting connection between the local entropy production of the medium (or entropy flux) and the local heat flux divided by the same local temperature *θ (x)* = *T / (x)* featuring in the approximate detailed balance solution, Eq. (8.103).

Define the dissipative components of the current as:

$$I\_{\chi}(\chi, v) = 0,$$

$$\Gamma \qquad \tau$$

$$
\tilde{I}\_v(\mathbf{x}, v) = -\frac{\Gamma}{\mathfrak{r}} vp - \frac{T}{\mathfrak{r}} \frac{\partial p}{\partial v} \mathbf{.}
$$

It is clear that in the GCN in the steady state while the time derivative of the entropy vanishes *s*˙ = 0, its two contributions are not necessarily zero.

$$\dot{s}\_s(t) = \frac{\pi}{T} \iint d\mathbf{x} dv \frac{\tilde{I}\_v^2(\mathbf{x}, t)}{p(\mathbf{x}, v, t)}\tag{8.99}$$

$$\dot{s}\_m(t) = \iint dx dv \,\frac{1}{\theta(\mathbf{x})} \, v \,\tilde{I}\_v(\mathbf{x}, v, t) \tag{8.100}$$

*s*˙*<sup>s</sup>* is an entropy production rate which is always non-negative, while *s*˙*<sup>m</sup>* is the entropy flux due to heat exchanges between the system and the surroundings and can have either signs.

# **Appendix 2: Absence of Detailed Balance Condition in the GCN**

As in Appendix 1, let us consider the elementary case of a single active particle in one dimension driven by Gaussian coloured noise. The probability current, **I***(x, v)*, is the two dimensional vector, comprising both reversible and irreversible contribution given by Eq. (8.90). Let *ps(x, v)* be a steady state solution of the Fokker–Planck equation, such that

$$\text{div}\mathbf{I} = \frac{\partial I\_x}{\partial x} + \frac{\partial I\_v}{\partial v} = 0. \tag{8.101}$$

The detailed balance condition requires that in the steady state [30] the irreversible part of the current, represented by the terms proportional to *ζ* , must vanish:

$$-\frac{\Gamma(\alpha)}{\pi}v p\_s(\mathbf{x}, \upsilon) - \frac{T}{\pi} \frac{\partial p\_s}{\partial \upsilon} = 0\tag{8.102}$$

*ps* must be the product of a function of the position, *π(x)*, times a "local" Maxwellian whose velocity variance is also position-dependent:

$$p\_s(\mathbf{x}, v) = \pi(\mathbf{x}) \exp\left(-\frac{\Gamma(\mathbf{x})}{2T} v^2\right). \tag{8.103}$$

In virtue of Eq. (8.101) the reversible part of the current vector *(vps,* −*U ps/(τ γ ))* must fulfil the condition:

$$
\left[v\frac{\partial}{\partial \mathbf{x}} - \frac{1}{\tau \nu} \frac{dU(\mathbf{x})}{d\mathbf{x}} \frac{\partial}{\partial v}\right] p\_s(\mathbf{x}, v) = 0. \tag{8.104}
$$

Plugging the distribution Eq. (8.103) into Eq. (8.104) we obtain:

$$v\left(\frac{1}{\pi(\mathbf{x})}\frac{d\pi(\mathbf{x})}{d\mathbf{x}} + \frac{1}{\pi\chi}\frac{dU(\mathbf{x})}{d\mathbf{x}}\frac{\Gamma(\mathbf{x})}{T} - \frac{1}{2}\frac{d\Gamma(\mathbf{x})}{d\mathbf{x}}\frac{v^2}{T}\right)$$

$$\pi(\mathbf{x})\exp\left(\frac{-\Gamma(\mathbf{x})}{2T}v^2\right) \neq 0,\tag{8.105}$$

and conclude that a function *π(x)* satisfying Eq. (8.104) only exists when *(x)* is a constant 0. This condition occurs for *τ* → 0, which is the equilibrium limit of the model, or when the potential is a linear or parabolic function of *x*. In conclusion, apart from the special case 0, the Kramers equation (8.11) does not satisfy the detailed balance condition. However, we can determine an approximate steady solution consistent with the UCNA under the form:

$$
\pi\_{\rm trial}(\mathbf{x}) = \Gamma^{3/2}(\mathbf{x}) \exp\left(-\frac{U(\mathbf{x}) + \frac{\mathbf{r}}{2\gamma}U'(\mathbf{x})^2}{D\chi}\right). \tag{8.106}
$$

In spite of the fact that such trial solution is not divergence-free, i.e. div**I** = 0, the first three velocity moments, obtained by multiplying the divergence by *(*1*, v, v*2*)*, respectively, and integrating w.r.t. *v*, vanish, so that in this subspace the zero divergence condition holds.

**Acknowledgement** A.S. acknowledges support from "Programma VALERE" of University of Campania "L. Vanvitelli".

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 9 Quadrature-Based Lattice Boltzmann Models for Rarefied Gas Flow**

**Victor E. Ambrus, and Victor Sofonea**

# **9.1 Introduction**

At non-negligible values of the Knudsen number Kn (defined as the ratio between the mean free path of the fluid particles in a gas and the characteristic length of the domain), the Navier–Stokes equations lose applicability [1, 2]. Such rarefied gas flows can be approached within the framework of the Boltzmann equation [3– 5]. This equation describes the six-dimensional phase-space evolution of the distribution function *f* , where *f (t, x, p)d*3*xd*3*p* gives the number of particles at time *t* which are contained in an infinitesimal volume *d*3*x* centred on *x*, having momenta in an infinitesimal range *d*3*p* about *p*. Because of its complexity, the Boltzmann equation can be solved analytically only in a very limited number of cases. Alternatively, numerous well-established approaches to the numerical solutions of the Boltzmann equation are now currently used for academic or engineering purposes, of which we only mention the direct simulation Monte Carlo (DSMC) technique [6], the discrete velocity models (DVMs) [7–9], the discrete unified gas-kinetic scheme (DUGKS) [10–12] and the lattice Boltzmann (LB) models [13–20].

The LB models are a particular type of DVMs and are derived from the Boltzmann equation using a simplified version of the collision operator, as well as an appropriate discretisation of the momentum space, which ensure the recovery of the moments of the distribution function *f* up to a certain order *N*. Originally derived nearly 30 years ago from the lattice gas automata [17, 19, 20], the LB

Department of Physics, West University of Timis,oara, Timis,oara, Romania

V. Sofonea (-)

V. E. Ambrus,

Center for Fundamental and Advanced Technical Research, Romanian Academy, Timis,oara, Romania

<sup>©</sup> The Editor(s) (if applicable) and The Author(s) 2019

F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9\_9

models were primarily designed to recover the hydrodynamics of fluid systems at the Navier–Stokes level. The LB models inherited the collision-streaming concept from their ancestors, according to which the velocities of the fluid particles are aligned along the lattice links such that after one time step *δt*, each particle arrives at a neighbouring node [13, 14, 17, 19, 21, 22].

One disadvantage of the collision-streaming paradigm is the increasing difficulty to approach fluid systems far from equilibrium (e.g., rarefied gases or micro/nanoscale flowing fluids) using suitable LB models. In this case, the accurate recovery of specific effects in channel flow at large values of Kn, such as the velocity slip and the temperature jump at the channel walls [1, 2], requires that higher order moments of the single particle distribution function *f* are ensured. Since the moments of *f* are derived by integration in the momentum space, their numerical computation involves the use of convenient quadrature methods. When using a quadrature method, the moments of the single particle distribution *f* (up to a certain order *N*) are exactly recovered by sums over a finite set of momentum vectors **p***k*, 1 ≤ *k* ≤ *K* [13, 14, 17–19, 21–26]. As the fluid system is farther from the equilibrium and the characteristic value of the Knudsen number increases, the number *K* of the momentum vectors (i.e., the quadrature points) should also be increased, as it will be shown later. This task becomes more and more elaborated if one wants to keep the particles hopping from a lattice node to another one during a single time step [27–30].

An alternative to the collision-streaming paradigm is provided by the off-lattice LB models, where the distribution functions are evolved in the lattice nodes using finite-difference, finite-volume or interpolation schemes [31–34]. A fourthorder, off-lattice LB model for the simulation of thermal flows in the continuum regime (small values of the Knudsen number), where the fluid density, velocity and temperature fields are derived from a single set of distribution functions, was proposed by Watari and Tsutahara [35] for 2D flows and subsequently extended to the 3D case [36–38]. Off-lattice LB models of any order *N* can be easily constructed using the Gauss quadrature method in the velocity space [18, 23, 25, 26, 39, 40].

Another challenge for microfluidics simulations is due to the implementation of boundary conditions. In general, the particle–wall interaction governs the distribution of particles emerging from the wall back into the fluid. Since the distribution of particles travelling from the fluid towards the wall is essentially arbitrary, the distribution function becomes discontinuous near the wall [41, 42]. Examples of boundary conditions include the diffuse-spectral [9] and Cercignani– Lampis [43] particle–wall interaction models; however, for simplicity, we restrict the analysis to the simpler diffuse reflection model, which is a limiting case of both models mentioned above. According to the diffuse reflection paradigm, the reflected particles follow a Maxwell–Boltzmann distribution corresponding to the wall temperature and velocity. In order to accurately compute the incident and emergent fluxes required to impose diffuse reflection (kinetic) boundary conditions, it is convenient to discretise the velocity set based on half-range Gauss quadrature methods. Such techniques were also used in the frame of DVMs [44, 45] and more recently, they were adapted for the LB method [24, 26, 46, 47].

# **9.2 Generalities**

The Boltzmann equation for a force-driven flow reads:

$$
\partial\_t f + \frac{\mathbf{p}}{m} \cdot \nabla f + \mathbf{F} \cdot \nabla\_\mathbf{p} f = -\frac{1}{\tau} (f - f^{\text{eq}}), \tag{9.1}
$$

where on the right-hand side we have used the Bhatnagar–Gross–Krook (BGK) single-relaxation time approximation of the collision term [48]. The distribution function *f* ≡ *f (***x***,* **p***,t)* represents the density of particles at position **x** and time *t*, having momentum **p**, while

$$f^{\text{eq}} = n g\_{\text{x}} g\_{\text{y}} g\_{\text{z}},\tag{9.2}$$

$$\mathbf{g}\_{\alpha} = \frac{1}{\sqrt{2\pi m k\_B T}} \exp\left[-\frac{(p\_{\alpha} - m u\_{\alpha})^2}{2mk\_B T}\right], \qquad \alpha \in \{x, y, z\} \tag{9.3}$$

is the Maxwell–Boltzmann equilibrium distribution function corresponding to local thermal equilibrium. The force **F** ≡ **F***(***x***)* in Eq. (9.1) encapsulates all external forces acting on these particles.

The local quantities describing the gas flow at the macroscopic level, namely the particle number density *n*, the velocity **u**, the stress tensor *Tαβ* (*α, β* = 1*,* 2*,* 3) and the heat flux *q*, can be obtained as moments of *f* :

$$\begin{pmatrix} n \\ \rho \mathbf{u} \\ T\_{\alpha\beta} \\ \mathbf{e} \\ \mathbf{e} \end{pmatrix} = \int d^3 p \begin{pmatrix} 1 \\ \mathbf{p} \\ \mathbf{p} \\ \xi\_{\alpha}\xi\_{\beta}/m \\ \xi\xi^2 \\ \frac{\xi\xi^2}{2m^2} \end{pmatrix} f,\tag{9.4}$$

where *ξα* = *pα* − *muα* is the peculiar momentum and *ρ* = *mn* is the mass density. The pressure *P* is defined through:

$$P = \frac{1}{3} \sum\_{\alpha} T\_{\alpha \alpha},\tag{9.5}$$

while the temperature is obtained via *T* = *P /nkB*, which represents the equation of state for an ideal gas (*kB* is the Boltzmann constant). More generally, it is convenient to introduce the following notation for the moments of *f* and *f* eq:

$$
\begin{pmatrix} M\_{s\_x,s\_y,s\_z} \\ M\_{s\_x,s\_y,s\_z}^{\mathrm{(eq)}} \end{pmatrix} = \int d^3 p \begin{pmatrix} f \\ f^{\mathrm{eq}} \end{pmatrix} (p\_x)^{s\_x} (p\_y)^{s\_y} (p\_z)^{s\_z}.\tag{9.6}
$$

Since mass *m*, momentum **p** and energy **p**²*/*2*m* are collision invariant quantities, we have:

$$n = M\_{0,0,0} = M\_{0,0,0}^{\text{(eq)}},\tag{9.7}$$

$$\begin{pmatrix} \rho u\_x \\ \rho u\_y \\ \rho u\_z \end{pmatrix} = \begin{pmatrix} M\_{1,0,0} \\ M\_{0,1,0} \\ M\_{0,0,1} \end{pmatrix} = \begin{pmatrix} M\_{1,0,0}^{\text{(eq)}} \\ M\_{0,1,0}^{\text{(eq)}} \\ M\_{0,0,1}^{\text{(eq)}} \end{pmatrix},\tag{9.8}$$

$$\begin{split} \frac{3}{2} n k\_B T + \frac{1}{2} \rho \mathbf{u}^2 &= \frac{1}{2m} \left( M\_{2,0,0} + M\_{0,2,0} + M\_{0,0,2} \right) \\ &= \frac{1}{2m} \left( M\_{2,0,0}^{\text{(eq)}} + M\_{0,2,0}^{\text{(eq)}} + M\_{0,0,2}^{\text{(eq)}} \right) . \end{split} \tag{9.9}$$

The basic steps for the construction of an off-lattice LB model are summarised in Ref. [49] and are illustrated in the following sections.


A common feature of all LB models is that the conservation equations for the particle number density *n*, macroscopic momentum density *ρ***u** and temperature *T* (for thermal models) are exactly recovered. Regardless of the chosen discretisation of the momentum space, the Boltzmann–BGK equation (9.1) is replaced by a set of *K* equations:

$$
\partial\_t f\_k + \frac{\mathbf{p}\_k}{m} \cdot \nabla f\_k + \mathbf{F} \cdot (\nabla\_\mathbf{p} f)\_k = -\frac{1}{\tau} (f\_k - f\_k^{\text{eq}}), \tag{9.10}
$$

where *fk* (*k* = 1*,* 2*,...,K*) represents the distribution function corresponding to the discrete momentum **p***k*. The total number *K* of discrete momenta is chosen such that the moments in Eq. (9.6) are exactly recovered:

### 9 Quadrature-Based Lattice Boltzmann Models for Rarefied Gas Flow 275

$$
\begin{pmatrix} M\_{s\_x,s\_y,s\_z} \\ M\_{s\_x,s\_y,s\_z}^{\mathrm{(eq)}} \end{pmatrix} = \sum\_{k=1}^{K} \begin{pmatrix} f\_{k} \\ f\_{k}^{\mathrm{eq}} \end{pmatrix} (p\_{k,x})^{s\_x} (p\_{k,y})^{s\_y} (p\_{k,z})^{s\_z}.\tag{9.11}
$$

The order *N* of a given LB model is related to the maximum value of the exponents *sx , sy , sz* for which the above equality holds.

# **9.3 One-Dimensional Quadrature-Based LB Models**

In this section, the procedure for implementing the full-range and half-range Gauss–Hermite quadratures on a single axis of the momentum space will be discussed in Sects. 9.3.1 and 9.3.2, respectively. For convenience, in this section we will refer to the one-dimensional (1*D*) equivalent of the Boltzmann–BGK equation (9.1):

$$
\partial\_t f + \frac{p}{m} \partial\_x f + F \partial\_p f = -\frac{1}{\tau} (f - f^{\text{eq}}).\tag{9.12}
$$

After discretisation, this equation is replaced by a set of *K* = Q equations, where Q is the number of quadrature points on the entire axis:

$$
\partial\_t f\_k + \frac{p\_k}{m} \partial\_x f\_k + F(\partial\_p f)\_k = -\frac{1}{\tau} (f\_k - f\_k^{\text{eq}}).\tag{9.13}
$$

The general expression of the total number *K* of discrete momenta employed by a *D*-dimensional LB model will be introduced in Sect. 9.4.

# *9.3.1 Full-Range Gauss–Hermite Quadrature*

Let us consider integrals of *f* and *f* eq along the axis of the 1*D* momentum space:

$$
\begin{pmatrix} M\_s \\\\ M\_s^{\text{(eq)}} \end{pmatrix} = \int\_{-\infty}^{\infty} dp \begin{pmatrix} f \\\\ f^{\text{eq}} \end{pmatrix} p^s. \tag{9.14}
$$

For the purpose of this section, we can consider *f* eq = *ng*, where *g* is expressed as in Eq. (9.3), but without using the subscript *α*. The function *g* can be expanded with respect to the full-range Hermite polynomials {*H_ℓ(p̄)*, ℓ = 0*,* 1*,...*} as follows [25, 26]:

$$g = \frac{\omega(\overline{p})}{p\_0} \sum\_{\ell=0}^{\infty} \frac{1}{\ell!} \mathcal{G}\_{\ell} H\_{\ell}(\overline{p}), \quad \mathcal{G}\_{\ell} = \sum\_{s=0}^{\lfloor \ell/2 \rfloor} \frac{\ell!}{2^s s! (\ell - 2s)!} \left( \frac{m k\_B T}{p\_0^2} - 1 \right)^s \left( \frac{mu}{p\_0} \right)^{\ell - 2s}, \tag{9.15}$$

where ⌊·⌋ is the floor function, *G_ℓ* is the ℓ-th expansion coefficient and *p̄* = *p/p*0 is the particle momentum expressed with respect to some arbitrary momentum scale *p*0. The full-range Hermite polynomials [18, 26, 39, 40] satisfy the following orthogonality relation with respect to the weight function *ω(p̄)*:

$$\int\_{-\infty}^{\infty} d\overline{p} \,\omega(\overline{p}) H\_{\ell}(\overline{p}) H\_{\ell'}(\overline{p}) = \ell! \,\delta\_{\ell,\ell'}, \qquad \omega(\overline{p}) = \frac{1}{\sqrt{2\pi}} e^{-\overline{p}^2/2}. \tag{9.16}$$

The expansion coefficients *G_ℓ* given in Eq. (9.15) were obtained according to:

$$\mathcal{G}\_{\ell} = \int\_{-\infty}^{\infty} dp \, g \, H\_{\ell}(\overline{p}). \tag{9.17}$$

Substituting Eq. (9.15) into Eq. (9.14) gives:

$$M\_s^{(\text{eq})} = p\_0^s \sum\_{\ell=0}^{\infty} \frac{1}{\ell!} \mathcal{G}\_\ell \int\_{-\infty}^{\infty} d\overline{p} \,\omega(\overline{p}) \, H\_\ell(\overline{p}) \, \overline{p}^s. \tag{9.18}$$

At finite values of *s* and ℓ, the Gauss–Hermite quadrature can be applied to recover the integral over *p̄* on the entire momentum axis, using the following prescription:

$$\int\_{-\infty}^{\infty} d\overline{p}\,\omega(\overline{p})P\_s(\overline{p}) \cong \sum\_{k=1}^{\mathcal{Q}} w\_k^H P\_s(\overline{p}\_k),\tag{9.19}$$

where *Ps(p)* is a polynomial of order *s* in *p* and the *Q* quadrature points *pk* (*k* = 1*,* 2*,...,Q*) are the roots of the Hermite polynomial of order *Q*, i.e., *HQ(pk)* = 0. Note that *K* = Q = *Q* holds only in a one-dimensional LB model based on fullrange Gauss–Hermite quadratures.

Since these roots correspond to the integration over the full momentum space axis, in the case of the full-range Gauss–Hermite quadrature, the number of quadrature points on the entire axis Q is equal to the quadrature order *Q*. The quadrature weights *w_k^H* are given by:

$$w\_k^H = \frac{\mathcal{Q}!}{[H\_{\mathcal{Q}+1}(\overline{p}\_k)]^2}.\tag{9.20}$$
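
As a quick numerical illustration (not part of the original derivation), the quadrature points and weights of Eqs. (9.19) and (9.20) can be obtained from NumPy's probabilists' Hermite module; the 1/√(2π) rescaling accounts for the normalisation of the weight function *ω(p̄)* in Eq. (9.16):

```python
import numpy as np

# Full-range Gauss-Hermite quadrature in the probabilists' convention,
# matching the weight function w(p) = exp(-p^2/2)/sqrt(2*pi) of Eq. (9.16).
Q = 4
nodes, weights = np.polynomial.hermite_e.hermegauss(Q)
weights = weights / np.sqrt(2.0 * np.pi)   # normalise so that sum(weights) = 1

# The prescription of Eq. (9.19) is exact for polynomials of order s < 2Q;
# the Gaussian moments 1, 0, 1, 0, 3 (s = 0, ..., 4) are recovered exactly.
moments = np.array([np.sum(weights * nodes**s) for s in range(5)])
```

Here *Q* = 4 corresponds to the order of the full-range quadrature employed on the *y* axis in Sect. 9.4.2.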

The equality in Eq. (9.19) is exact if 2*Q > s*. In an LB simulation, *Q* is fixed at runtime. Thus, in order to ensure the exact recovery of *M_s^(eq)* in Eq. (9.18), the sum over ℓ in Eq. (9.15) must be truncated at a finite value ℓ = *N*. Setting *Q > N* ensures the exact recovery of the first *N* + 1 moments (i.e., *s* = 0*,* 1*,...,N*) of *f* eq, since the terms of higher order in the expansion of *g* are orthogonal to all polynomials *Ps(p̄)* of orders 0 ≤ *s* ≤ *N*, by virtue of the orthogonality relation given by Eq. (9.16). This allows *M_s^(eq)* to be obtained as:

$$M\_s^{\text{(eq)}} = \sum\_{k=1}^{\mathcal{Q}} f\_k^{\text{eq}} p\_k^s, \qquad f\_k^{\text{eq}} = n g\_k^{H,(N)}, \qquad g\_k^{H,(N)} = \frac{w\_k^H p\_0}{\omega(\overline{p}\_k)} g^{H,(N)}(p\_k), \tag{9.21}$$

where *pk* = *p*0*p̄k* are the discrete momenta and the notation *g^{H,(N)}(p)* indicates that the polynomial expansion in Eq. (9.15) of *g(p)* is truncated at order ℓ = *N* with respect to the full-range Hermite polynomials. For definiteness, we list below the expression for *g_k^{H,(N)}* [25, 26]:

$$g\_k^{H,(N)} = w\_k^H \sum\_{\ell=0}^N H\_\ell(\overline{p}\_k) \sum\_{s=0}^{\lfloor \ell/2 \rfloor} \frac{1}{2^s s! (\ell - 2s)!} \left(\frac{m k\_B T}{p\_0^2} - 1\right)^s \left(\frac{mu}{p\_0}\right)^{\ell - 2s} . \tag{9.22}$$
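
The truncated discrete equilibrium of Eq. (9.22) can be sketched in a few lines (a minimal illustration in units *m* = *k_B* = *p*0 = 1 and for unit density *n* = 1, not the full implementation of Refs. [25, 26]); for *N* = 3 and *Q* = 4, the discrete moments of orders *s* ≤ 3 are recovered exactly:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Sketch of the discrete equilibrium g_k^{H,(N)} of Eq. (9.22),
# in units m = k_B = p0 = 1 and for unit density n = 1.
def equilibrium(Q, N, u, T):
    pbar, w = hermegauss(Q)                      # quadrature points pbar_k
    w = w / sqrt(2.0 * pi)                       # weights w_k^H of Eq. (9.20)
    g = np.zeros(Q)
    for l in range(N + 1):
        He_l = hermeval(pbar, [0] * l + [1])     # He_l evaluated at the nodes
        G_l = sum(factorial(l) / (2**s * factorial(s) * factorial(l - 2 * s))
                  * (T - 1.0)**s * u**(l - 2 * s)
                  for s in range(l // 2 + 1))    # coefficients G_l of Eq. (9.15)
        g += w * He_l * G_l / factorial(l)
    return pbar, g

pbar, g = equilibrium(Q=4, N=3, u=0.1, T=1.05)
# sum(g) = 1, sum(g*p) = u and sum(g*p^2) = T + u^2 to machine precision
```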

The momentum derivative *∂pf* can be written as:

$$(\partial\_p f)\_k = \sum\_{k'=1}^{\mathcal{Q}} \mathcal{K}\_{k,k'} f\_{k'},\tag{9.23}$$

where the kernel *K_{k,k′}* has the following components [49, 50]:

$$\mathcal{K}\_{k,k'} = -\frac{w\_k^H}{p\_0} \sum\_{\ell=0}^{\mathcal{Q}-2} \frac{1}{\ell!} H\_{\ell+1}(\overline{p}\_k) H\_\ell(\overline{p}\_{k'}).\tag{9.24}$$

# *9.3.2 Half-Range Gauss–Hermite Quadrature*

The half-range paradigm is inspired by the discontinuous nature of the distribution function due to the interaction with the channel walls. Such discontinuities naturally induce a split of the momentum space integration domain into two half-spaces, corresponding to particles travelling towards and away from the wall. In order to encompass the discontinuous nature of the distribution function in a one-dimensional LB model for confined fluid flow, it is convenient to introduce the half-range moments *M_s^±* and *M_s^{(eq),±}* (*s* = 0*,* 1*,* 2*,...* ) of *f* and *f* eq through:

$$
\begin{pmatrix} M\_s^+ \\\\ M\_s^{\text{(eq)},+} \end{pmatrix} = \int\_0^\infty dp \begin{pmatrix} f(p) \\\\ f^{\text{eq}}(p) \end{pmatrix} p^s, \quad \begin{pmatrix} M\_s^- \\\\ M\_s^{\text{(eq)},-} \end{pmatrix} = \int\_{-\infty}^0 dp \begin{pmatrix} f(p) \\\\ f^{\text{eq}}(p) \end{pmatrix} p^s. \tag{9.25}
$$

The recovery of the half-range integrals in Eq. (9.25) can be achieved using the halfrange Gauss–Hermite quadrature, defined with respect to the weight function *ω(p)*:

$$\int\_0^\infty d\overline{p}\,\omega(\overline{p})P\_s(\overline{p}) \simeq \sum\_{k=1}^{Q} w\_k^{\mathfrak{h}} P\_s(\overline{p}\_k), \qquad \omega(\overline{p}) = \frac{1}{\sqrt{2\pi}} e^{-\overline{p}^2/2},\tag{9.26}$$

where the equality is exact if the number of quadrature points *Q* satisfies 2*Q > s*. The quadrature points *p̄k* (*k* = 1*,* 2*,...,Q*) are the *Q* (positive) roots of the half-range Hermite polynomial *h_Q(p̄)*, while the quadrature weights *w_k^h* are given by [25, 26, 39, 40]:

$$w\_k^{\mathfrak{h}} = \frac{\overline{p}\_k a\_{\mathcal{Q}}^2}{\mathfrak{h}\_{\mathcal{Q}+1}^2(\overline{p}\_k) \left[\overline{p}\_k + \mathfrak{h}\_{\mathcal{Q},0}^2/\sqrt{2\pi}\right]},\tag{9.27}$$

where *a_Q* = *h_{Q+1,Q+1}/h_{Q,Q}* and *h_{ℓ,s}* represents the coefficient of *p̄^s* in *h_ℓ(p̄)*, i.e.,

$$\mathfrak{h}\_{\ell}(\overline{p}) = \sum\_{s=0}^{\ell} \mathfrak{h}\_{\ell,s} \overline{p}^s. \tag{9.28}$$

In our convention, the half-range Hermite polynomials are normalised according to:

$$\int\_0^\infty d\overline{p}\,\omega(\overline{p})\mathfrak{h}\_\ell(\overline{p})\mathfrak{h}\_{\ell'}(\overline{p}) = \delta\_{\ell,\ell'}.\tag{9.29}$$
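
Since the half-range Hermite polynomials are less commonly tabulated than their full-range counterparts, a practical way to generate the first few of them is Gram–Schmidt (here via Cholesky) orthonormalisation of the monomials with respect to *ω(p̄)* on [0, ∞). The sketch below is an illustration rather than the production algorithm of Refs. [39, 40]; it uses the half-range Gaussian moments *μ_s* = ∫₀^∞ *dp̄ ω(p̄) p̄^s*, which satisfy *μ_s* = (*s* − 1)*μ_{s−2}*:

```python
import numpy as np
from math import sqrt, pi

# Build the coefficients h_{l,s} (Eq. (9.28)) of the first L half-range
# Hermite polynomials, orthonormal per Eq. (9.29), via Cholesky factorisation
# of the Gram matrix of monomials on [0, inf) with the weight of Eq. (9.26).
L = 4
mu = np.zeros(2 * L)                  # half-range Gaussian moments
mu[0], mu[1] = 0.5, 1.0 / sqrt(2.0 * pi)
for s in range(2, 2 * L):
    mu[s] = (s - 1) * mu[s - 2]       # recurrence from integration by parts

G = np.array([[mu[i + j] for j in range(L)] for i in range(L)])  # <p^i, p^j>
C = np.linalg.cholesky(G)             # G = C C^T, lower-triangular C
coeffs = np.linalg.inv(C)             # row l holds h_{l,0}, ..., h_{l,l}
# coeffs @ G @ coeffs.T is the identity: the polynomials are orthonormal
```

The positive diagonal of the Cholesky factor guarantees positive leading coefficients, consistent with the normalisation convention of Eq. (9.29); for instance, *h*0 = √2.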

In order to apply the half-range Gauss–Hermite quadrature prescription, Eq. (9.26), *f* and *f* eq must be expanded with respect to the half-range Hermite polynomials. Since the half-range Hermite polynomials are defined only on half of the momentum axis, *f* can be split with the help of the Heaviside step function *θ (p)* as follows [49]:

$$f(p) = \theta(\overline{p})f^+(p) + \theta(-\overline{p})f^-(p), \qquad \theta(\overline{p}) = \begin{cases} 1, & \overline{p} > 0, \\ 0, & \overline{p} < 0. \end{cases} \tag{9.30}$$

The functions *f* +*(p)* and *f* −*(p)* are defined only on the positive and negative momentum semiaxis, respectively, such that they can be expanded with respect to the half-range Hermite polynomials as follows:

$$f^{+} = \frac{\omega(\overline{p})}{p\_0} \sum\_{\ell=0}^{\infty} \mathcal{F}\_\ell^{+} \mathfrak{h}\_{\ell}(\overline{p}), \qquad f^{-} = \frac{\omega(-\overline{p})}{p\_0} \sum\_{\ell=0}^{\infty} \mathcal{F}\_\ell^{-} \mathfrak{h}\_{\ell}(-\overline{p}), \tag{9.31}$$

where the coefficients *F_ℓ^±* can be obtained using the orthogonality relation in Eq. (9.29) of the half-range Hermite polynomials:

$$\mathcal{F}\_{\ell}^{+} = \int\_{0}^{\infty} dp \, f(p) \mathfrak{h}\_{\ell}(\overline{p}), \qquad \mathcal{F}\_{\ell}^{-} = \int\_{-\infty}^{0} dp \, f(p) \mathfrak{h}\_{\ell}(-\overline{p}). \tag{9.32}$$

The expansion in Eq. (9.31) with respect to the half-range Hermite polynomials *h_ℓ* can be substituted in Eq. (9.25), yielding:

$$
\begin{pmatrix} M\_s^+ \\ M\_s^- \end{pmatrix} = p\_0^s \sum\_{\ell=0}^\infty \begin{pmatrix} \mathcal{F}\_\ell^+ \\ \mathcal{F}\_\ell^- \end{pmatrix} \int\_0^\infty d\overline{p} \,\omega(\overline{p}) \, \mathfrak{h}\_\ell(\overline{p}) (\pm \overline{p})^s. \tag{9.33}
$$

Truncating the expansion in Eq. (9.31) at ℓ = *Q* − 1 ensures that a quadrature of order *Q* can recover the moments in Eq. (9.33) for 0 ≤ *s* ≤ *Q*. Since *Q* quadrature points are required on each semiaxis of the momentum space, the discrete momentum set of the 1*D* half-range Gauss–Hermite LB model has *K* = Q = 2*Q* elements (twice as many as in the full-range model of the same order), which are defined as:

$$p\_k = p\_0 \overline{p}\_k, \qquad p\_{k+\mathcal{Q}} = -p\_k \qquad (1 \le k \le \mathcal{Q}).\tag{9.34}$$

Thus, the half-range moments in Eq. (9.25) are recovered as:

$$M\_s^+ = \sum\_{k=1}^{\mathcal{Q}} f\_k p\_k^s, \qquad M\_s^- = \sum\_{k=\mathcal{Q}+1}^{2\mathcal{Q}} f\_k p\_k^s,\tag{9.35}$$

where

$$f\_k = \frac{w\_k^{\mathfrak{h}} p\_0}{\omega(\overline{p}\_k)} f(p\_k), \qquad f\_{k+\mathcal{Q}} = \frac{w\_k^{\mathfrak{h}} p\_0}{\omega(\overline{p}\_k)} f(-p\_k) \qquad (1 \le k \le \mathcal{Q}).\tag{9.36}$$

Let us now consider the expansion of *f* eq with respect to the half-range Hermite polynomials, by writing *g(p)* = *θ (p)g*+*(p)* + *θ (*−*p)g*−*(p)*, where

$$g\_{\pm} = \frac{\omega(|\overline{p}|)}{p\_0} \sum\_{\ell=0}^{\infty} \mathcal{G}\_{\ell}^{\pm} \mathfrak{h}\_{\ell}(|\overline{p}|). \tag{9.37}$$

The expansion coefficients *G_ℓ^±* can be obtained in analogy to Eq. (9.32).

Following the convention of Eq. (9.34), the momentum space is discretised using Q = 2*Q* elements with *pk >* 0 (for the positive semiaxis) and *pk*+*Q* = −*pk* (for the negative semiaxis), where 1 ≤ *k* ≤ *Q*. The corresponding equilibrium distributions *f_k^eq* = *n g_k^{h,(N)}* are constructed using

$$\mathfrak{g}\_k^{\mathfrak{h},(N)} = w\_k^{\mathfrak{h}} \sum\_{\ell=0}^N \mathcal{G}\_\ell^+ \mathfrak{h}\_\ell(\overline{p}\_k), \qquad \mathfrak{g}\_{k+\mathcal{Q}}^{\mathfrak{h},(N)} = w\_k^{\mathfrak{h}} \sum\_{\ell=0}^N \mathcal{G}\_\ell^- \mathfrak{h}\_\ell(\overline{p}\_k), \tag{9.38}$$

where the expansion order 0 ≤ *N* < *Q* is a free parameter of the model which represents the order up to which the half-range moments of *f* eq can be exactly recovered. The coefficients *G_ℓ^±* can be found using the orthogonality relation in Eq. (9.29):

$$\mathcal{G}\_{\ell}^{+} = \int\_{0}^{\infty} dp \, g \, \mathfrak{h}\_\ell(\overline{p}), \qquad \mathcal{G}\_{\ell}^{-} = \int\_{-\infty}^{0} dp \, g \, \mathfrak{h}\_\ell(-\overline{p}). \tag{9.39}$$

The integrals above can be performed analytically, such that *g_k^{h,(N)}* and *g_{k+Q}^{h,(N)}* become [25, 26]:

$$\begin{split} g\_{k}^{\mathfrak{h},(N)} &= \frac{w\_{k}^{\mathfrak{h}}}{2} \sum\_{s=0}^{N} \left( \frac{m k\_B T}{2p\_{0}^{2}} \right)^{s/2} \Phi\_{s}^{N}(\overline{p}\_{k}) \left[ (1 + \text{erf}\,\zeta) P\_{s}^{+}(\zeta) + \frac{2}{\sqrt{\pi}} e^{-\zeta^{2}} P\_{s}^{\*}(\zeta) \right], \\ g\_{k+\mathcal{Q}}^{\mathfrak{h},(N)} &= \frac{w\_{k}^{\mathfrak{h}}}{2} \sum\_{s=0}^{N} \left( \frac{m k\_B T}{2p\_{0}^{2}} \right)^{s/2} \Phi\_{s}^{N}(\overline{p}\_{k}) \left[ (\text{erfc}\,\zeta) P\_{s}^{+}(-\zeta) + \frac{2}{\sqrt{\pi}} e^{-\zeta^{2}} P\_{s}^{\*}(-\zeta) \right], \end{split} \tag{9.40}$$

where *ζ* = *u*√(*m/*2*k_B T*) and erf *ζ* = (2/√*π*) ∫_0^*ζ* *dz e*^(−*z*²) is the error function, while *Φ_s^N(p̄_k)* is defined as:

$$\boldsymbol{\Phi}\_s^N(\overline{p}\_k) = \sum\_{\ell=s}^N \mathfrak{h}\_{\ell,s} \mathfrak{h}\_\ell(\overline{p}\_k),\tag{9.41}$$

where *h_{ℓ,s}* is defined in Eq. (9.28), while *P_s^+(ζ)* and *P_s^∗(ζ)* represent polynomials of orders *s* and *s* − 1, respectively, defined through:

$$P\_s^{\pm}(\zeta) = e^{\mp \zeta^2} \frac{d^s}{d\zeta^s} e^{\pm \zeta^2}, \qquad P\_s^\*(\zeta) = \sum\_{j=0}^{s-1} \binom{s}{j} P\_j^+(\zeta) P\_{s-j-1}^-(\zeta). \tag{9.42}$$
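
The polynomials *P_s^±* of Eq. (9.42) satisfy the recurrence *P_{s+1}^±* = ±2*ζ P_s^±* + *dP_s^±/dζ*, obtained by differentiating their definition once more; this makes them straightforward to generate symbolically (a small sketch using NumPy's polynomial class):

```python
from numpy.polynomial import Polynomial as P

# Generate P_0^{+/-}, ..., P_smax^{+/-} of Eq. (9.42) via the recurrence
# P_{s+1} = sign * 2*zeta * P_s + dP_s/dzeta, starting from P_0 = 1.
def build(sign, smax):
    polys = [P([1.0])]
    for _ in range(smax):
        p = polys[-1]
        polys.append(sign * P([0.0, 2.0]) * p + p.deriv())
    return polys

Pp = build(+1, 3)     # P_1^+ = 2z, P_2^+ = 4z^2 + 2, P_3^+ = 8z^3 + 12z
Pm = build(-1, 3)     # P_1^- = -2z, P_2^- = 4z^2 - 2
```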

The momentum derivative of *f* can be projected on the space of the half-range Hermite polynomials as discussed in Sect. 9.3.1. Since this projection is not relevant for the further development of this chapter, we refer the reader to Refs. [49, 50] for further details.

# **9.4 LB Models in the Three-Dimensional Momentum Space**

In the three-dimensional (3*D*) momentum space, the discretisation procedure can be conducted using a direct product rule. On each Cartesian axis *α* ∈ {*x, y, z*}, one can choose a specific Gauss–Hermite (full-range or half-range) quadrature of order *Qα*, depending on the characteristics of the flow (e.g., the existence of a noticeable wall-induced discontinuity of the distribution function along the *α* axis). Let *p_{α,kα}*, 1 ≤ *kα* ≤ Q*α*, be the quadrature points on the Cartesian axis *α* (note that Q*α* ∈ {*Qα,* 2*Qα*} as mentioned in Sect. 9.3). These quadrature points are the components of the 3*D* vectors **p***k*, with *k* = *(kz* − 1*)*Q*x*Q*y* + *(ky* − 1*)*Q*x* + *kx* and 1 ≤ *k* ≤ *K* = Q*x*Q*y*Q*z*. Following Refs. [25, 26], we generally refer to the resulting models as *mixed quadrature LB models*. The numerical solution of the discretised form Eq. (9.10) of the Boltzmann equation can be obtained following the steps described in Sect. 9.3, as detailed below.
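
The direct-product flattening of the per-axis indices into the single index *k* can be written compactly (a sketch with 0-based indices, whereas the text uses 1-based ones):

```python
import numpy as np

# Direct-product construction of the 3D momentum set: flatten (kx, ky, kz)
# into k = kz*Qx*Qy + ky*Qx + kx (0-based version of the text's convention).
def momentum_set(px, py, pz):
    Qx, Qy, Qz = len(px), len(py), len(pz)
    pk = np.zeros((Qx * Qy * Qz, 3))
    for kz in range(Qz):
        for ky in range(Qy):
            for kx in range(Qx):
                pk[kz * Qx * Qy + ky * Qx + kx] = (px[kx], py[ky], pz[kz])
    return pk

pk = momentum_set([-1.0, 1.0], [-2.0, 2.0], [0.5])  # K = 2*2*1 = 4 vectors
```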

In this contribution we restrict ourselves to the LB simulation of Couette and force-driven Poiseuille flows of rarefied gases between parallel plates. In these cases, the flow is homogeneous along the *z* axis and the computational effort can be significantly decreased by taking advantage of the reduced distribution functions introduced in Sect. 9.4.1. Section 9.4.2 discusses the construction of the mixed quadrature LB models for the investigation of rarefied Couette and force-driven Poiseuille flows using the reduced distribution functions and the resulting evolution equations are presented in Sect. 9.4.3. The section ends with a discussion of our non-dimensionalisation convention, presented in Sect. 9.4.4.

# *9.4.1 Reduced Distributions*

In this chapter, we only consider the planar Couette and the force-driven Poiseuille flows between parallel plates. Considering that the walls are perpendicular to the *x* axis, these flows can be considered homogeneous with respect to the *y* and *z* axes, such that the Boltzmann equation (9.1) reduces to:

$$
\partial\_t f + \frac{p\_x}{m} \partial\_x f + F\_y \partial\_{p\_y} f = -\frac{1}{\tau} (f - f^{\text{eq}}).\tag{9.43}
$$

The force term is present only in the case of the Poiseuille flow. Assuming that the fluid flows along the *y* direction, the only non-vanishing component of the force is along the *y* axis (see Sect. 9.5.2 for more details).

Since the flows considered in this chapter are trivial with respect to the *z* axis, the *pz* degree of freedom can be eliminated from Eq. (9.43) [44, 51]. This helps to reduce the computational costs, especially when dealing with LB models involving high order quadratures. Defining:

$$\phi = \int\_{-\infty}^{\infty} dp\_z \, f, \qquad \chi = \int\_{-\infty}^{\infty} dp\_z \, f \, \frac{p\_z^2}{m}, \tag{9.44}$$

the following two equations are obtained:

$$
\partial\_t \begin{pmatrix} \phi \\ \chi \end{pmatrix} + \frac{p\_x}{m} \partial\_x \begin{pmatrix} \phi \\ \chi \end{pmatrix} + F\_y \frac{\partial}{\partial p\_y} \begin{pmatrix} \phi \\ \chi \end{pmatrix} = -\frac{1}{\tau} \begin{pmatrix} \phi - \phi^{\text{eq}} \\ \chi - \chi^{\text{eq}} \end{pmatrix}, \tag{9.45}
$$

where *χ*eq = *k_B T φ*eq and *φ*eq can be factorised using the functions *gα* of Eq. (9.3) as follows:

$$\phi^{\rm eq} = n g\_{\rm x} g\_{\rm y} = \frac{n}{2\pi m k\_B T} \exp\left[ -\frac{(p\_{\rm x} - m u\_{\rm x})^2 + (p\_{\rm y} - m u\_{\rm y})^2}{2mk\_B T} \right]. \tag{9.46}$$

Note that the reduction procedure introduced above can be used also for the 3*D* pressure-driven Poiseuille flow between parallel plates, provided that there are no variations along the *z* axis.

# *9.4.2 Mixed Quadrature LB Models with Reduced Distribution Functions*

In the mixed quadrature LB models, the momentum space is constructed using a direct product rule. This allows the quadrature on each axis to be constructed independently, taking into account the characteristics of the flow. When the gas flow is homogeneous along the *z* axis, the reduced distribution functions evolve in a two-dimensional space and thus the elements of the discrete set of momentum vectors can be written as **p***ij* = *(px,i, py,j)*. The indices *i* and *j* run from 1 to Q*α* (*α* ∈ {*x, y*}), where Q*α* = *Qα* or Q*α* = 2*Qα* when a full-range or a half-range quadrature of order *Qα* is employed on the *α* axis. As shown in Refs. [25, 26], a full-range Gauss–Hermite quadrature of order *Qy* = 4 is sufficient on the *y* axis in order to capture exactly the evolution of the velocity, temperature and heat flux fields. For low Mach number flows, the quadrature order *Qx* can be taken to be *Qx* = 4 in the Navier–Stokes regime, where the full-range Gauss–Hermite quadrature is efficient. As Kn is increased, *Qx* must also be increased in order to retain the accuracy of the simulation results. In the case of the channel flows considered in this chapter, the discontinuity in the distribution functions *φ* and *χ* induced by the diffuse-reflective walls becomes significant at sufficiently large Kn. Hence, the full-range Gauss–Hermite quadrature on the *x* axis becomes inefficient compared to the half-range Gauss–Hermite quadrature, as demonstrated in Refs. [25, 26]. In this chapter, we only consider the half-range Gauss–Hermite quadrature of order *Qx* on the *x* axis. The resulting models are denoted HHLB*(Qx)* × HLB*(*4*)* following the convention in Ref. [26], employing 8*Qx* velocities and 16*Qx* distinct populations (*φij* and *χij*), as discussed below.

# *9.4.3 The Lattice Boltzmann Equation*

The reduced distribution functions *φij* and *χij* corresponding to the momentum vector **p***ij* = *(px,i, py,j )* are linked to *φ* and *χ* through the direct extension of Eq. (9.36):

$$
\begin{pmatrix} \phi\_{ij} \\ \chi\_{ij} \end{pmatrix} = \left( \frac{w\_i^x p\_{0,x}}{\omega(\overline{p}\_{x,i})} \right) \left( \frac{w\_j^y p\_{0,y}}{\omega(\overline{p}\_{y,j})} \right) \begin{pmatrix} \phi(p\_{x,i}, p\_{y,j}) \\ \chi(p\_{x,i}, p\_{y,j}) \end{pmatrix}. \tag{9.47}
$$

The weights *w_i^x* and *w_j^y* are given by Eqs. (9.27) and (9.20), respectively. After the discretisation of the momentum space, Eq. (9.45) becomes:

$$
\partial\_t \begin{pmatrix} \phi\_{ij} \\ \chi\_{ij} \end{pmatrix} + \frac{p\_{x,i}}{m} \partial\_{x} \begin{pmatrix} \phi\_{ij} \\ \chi\_{ij} \end{pmatrix} + F\_{y} \sum\_{j'=1}^{\mathcal{Q}\_{y}} \mathcal{K}\_{j,j'} \begin{pmatrix} \phi\_{i,j'} \\ \chi\_{i,j'} \end{pmatrix} = -\frac{1}{\tau} \begin{pmatrix} \phi\_{ij} - \phi\_{ij}^{\mathrm{eq}} \\ \chi\_{ij} - \chi\_{ij}^{\mathrm{eq}} \end{pmatrix},\tag{9.48}
$$

where the kernel *K_{j,j′}* is given in Eq. (9.24). In particular, for the case *Qy* = 4 considered in this chapter, *K_{j,j′}* has the following elements:

$$\mathcal{K}\_{j,j'} = \frac{1}{p\_{0,y}}\begin{pmatrix}\frac{1}{2}\sqrt{3+\sqrt{6}} & \frac{\sqrt{3+\sqrt{3}}}{2(3+\sqrt{6})} & -\frac{\sqrt{3-\sqrt{3}}}{2(3+\sqrt{6})} & \frac{1}{2}\sqrt{1-\sqrt{\frac{2}{3}}}\\ -\frac{\sqrt{27+11\sqrt{6}}+\sqrt{3+\sqrt{6}}}{2\sqrt{6}} & \frac{1}{2}\sqrt{3-\sqrt{6}} & \frac{1}{2}\sqrt{1+\sqrt{\frac{2}{3}}} & -\frac{\sqrt{27+11\sqrt{6}}-\sqrt{3+\sqrt{6}}}{2\sqrt{6}}\\ \frac{\sqrt{27+11\sqrt{6}}-\sqrt{3+\sqrt{6}}}{2\sqrt{6}} & -\frac{1}{2}\sqrt{1+\sqrt{\frac{2}{3}}} & -\frac{1}{2}\sqrt{3-\sqrt{6}} & \frac{\sqrt{27+11\sqrt{6}}+\sqrt{3+\sqrt{6}}}{2\sqrt{6}}\\ -\frac{\sqrt{3-\sqrt{6}}}{2\sqrt{3}} & \frac{\sqrt{3-\sqrt{3}}}{2(3+\sqrt{6})} & -\frac{\sqrt{3+\sqrt{3}}}{2(3+\sqrt{6})} & -\frac{1}{2}\sqrt{3+\sqrt{6}}\end{pmatrix}.\tag{9.49}$$

Numerically, the above expression reduces to:

$$\mathcal{K}\_{j,j'} \simeq \frac{1}{p\_{0,y}} \begin{pmatrix} 1.1672 & 0.1996 & -0.1033 & 0.2142 \\ -1.9757 & 0.3710 & 0.6739 & -1.0227 \\ 1.0227 & -0.6739 & -0.3710 & 1.9757 \\ -0.2142 & 0.1033 & -0.1996 & -1.1672 \end{pmatrix} . \tag{9.50}$$
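
The entries of Eq. (9.50) can be reproduced directly from Eq. (9.24) (a numerical sketch in the probabilists' Hermite convention, with *p*0,y = 1 and nodes ordered from most negative to most positive, which matches the row/column ordering of Eq. (9.50)):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Evaluate the kernel K_{j,j'} of Eq. (9.24) for Q = 4 and p0 = 1.
Q = 4
pbar, w = hermegauss(Q)                 # nodes in ascending order
w = w / sqrt(2.0 * pi)                  # weights w_j^H of Eq. (9.20)

He = [hermeval(pbar, [0] * l + [1]) for l in range(Q)]   # He_0 .. He_{Q-1}
K = np.zeros((Q, Q))
for j in range(Q):
    for jp in range(Q):
        K[j, jp] = -w[j] * sum(He[l + 1][j] * He[l][jp] / factorial(l)
                               for l in range(Q - 1))
# K reproduces the matrix of Eq. (9.50) to the quoted number of digits
```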

The equilibrium distribution *φ_ij^eq* = *n g_i^{h,(Nx)} g_j^{H,(Ny)}* is obtained as the product between the expansions *g_i^{h,(Nx)}* from Eq. (9.40) and *g_j^{H,(Ny)}* from Eq. (9.22), performed with respect to the half-range and full-range Hermite polynomials, respectively. For this particular case, the orders of the expansions are *Nx* = 3 and *Ny* = 3. For definiteness, we list below the exact expression for *g_j^{H,(3)}*:

$$\begin{split} g\_{j}^{H,(3)} &= w\_{j}^{H} \Big[ 1 + \overline{p}\_{y,j} \mathfrak{U}\_{y} + \frac{1}{2} (\overline{p}\_{y,j}^{2} - 1)(\mathfrak{U}\_{y}^{2} + \mathfrak{T}\_{y}) \\ &+ \frac{1}{6} \Big( \overline{p}\_{y,j}^{3} - 3 \overline{p}\_{y,j} \Big) \mathfrak{U}\_{y} (\mathfrak{U}\_{y}^{2} + 3 \mathfrak{T}\_{y}) \Big], \end{split} \tag{9.51}$$

where *U_y* and *T_y* are defined as [26]:

$$\mathfrak{U}\_{y} = \frac{mu\_{y}}{p\_{0,y}}, \qquad \mathfrak{T}\_{y} = \frac{m k\_B T}{p\_{0,y}^2} - 1. \tag{9.52}$$

Similarly, *g_i^{h,(3)}* is given by Ambruş and Sofonea [25]:

$$\begin{split} g\_{i}^{\mathfrak{h},(3)} &= \frac{w\_{i}^{\mathfrak{h}}}{2} \Big\{ (1 + \text{erf}\,\zeta\_{x,i}) \Big[ \Phi\_{0}^{3}(|\overline{p}\_{x,i}|) + 2\zeta\_{x,i}\mathcal{T}\_{x}\Phi\_{1}^{3}(|\overline{p}\_{x,i}|) \\ &\quad + 2\mathcal{T}\_{x}^{2}(2\zeta\_{x,i}^{2} + 1)\Phi\_{2}^{3}(|\overline{p}\_{x,i}|) + 4\zeta\_{x,i}\mathcal{T}\_{x}^{3}(2\zeta\_{x,i}^{2} + 3)\Phi\_{3}^{3}(|\overline{p}\_{x,i}|) \Big] \\ &\quad + \frac{2e^{-\zeta\_{x,i}^{2}}}{\sqrt{\pi}}\mathcal{T}\_{x}\Big[ \Phi\_{1}^{3}(|\overline{p}\_{x,i}|) + 2\zeta\_{x,i}\mathcal{T}\_{x}\Phi\_{2}^{3}(|\overline{p}\_{x,i}|) \\ &\quad + 4\mathcal{T}\_{x}^{2}(\zeta\_{x,i}^{2} + 1)\Phi\_{3}^{3}(|\overline{p}\_{x,i}|) \Big] \Big\}, \end{split} \tag{9.53}$$

where *ζ_{x,i}* = *u_x σ_{x,i}*√(*m/*2*k_B T*), *σ_{x,i}* is the sign of *p_{x,i}* and *T_x* = √(*m k_B T/*2*p*²_{0,x}), while the functions *Φ_s³(z)* are given below:

$$\begin{aligned} \Phi\_0^3(z) &= \frac{2\pi(9\pi - 28) - z\sqrt{2\pi}(21\pi - 64) + 2\pi z^2(10 - 3\pi) - z^3\sqrt{2\pi}(16 - 5\pi)}{32 - 29\pi + 6\pi^2}, \\\Phi\_1^3(z) &= \frac{2\pi z(15\pi - 44) - \sqrt{2\pi}(21\pi - 64) - z^2\sqrt{2\pi}(16 - 3\pi) + 2\pi z^3(10 - 3\pi)}{32 - 29\pi + 6\pi^2}, \\\Phi\_2^3(z) &= \frac{2\pi(10 - 3\pi) - z\sqrt{2\pi}(16 - 3\pi) + 2\pi z^2(3\pi - 7) - z^3\sqrt{2\pi}(3\pi - 8)}{32 - 29\pi + 6\pi^2}, \end{aligned}$$


$$\Phi\_3^3(z) = \frac{-\sqrt{2\pi}(16 - 5\pi) + 2\pi z(10 - 3\pi) - z^2\sqrt{2\pi}(3\pi - 8) + 2\pi z^3(\pi - 3)}{32 - 29\pi + 6\pi^2}.\tag{9.54}$$

Finally, the macroscopic moments Eq. (9.4) can be written in terms of *φij* and *χij* as follows:

$$\begin{gathered} n = \sum\_{i,j} \phi\_{ij} = \sum\_{i,j} \phi\_{ij}^{\text{eq}}, \qquad \begin{pmatrix} \rho u\_{x} \\ \rho u\_{y} \end{pmatrix} = \sum\_{i,j} \phi\_{ij} \begin{pmatrix} p\_{x,i} \\ p\_{y,j} \end{pmatrix} = \sum\_{i,j} \phi\_{ij}^{\text{eq}} \begin{pmatrix} p\_{x,i} \\ p\_{y,j} \end{pmatrix}, \\ \frac{3}{2} n k\_B T + \frac{1}{2} \rho \mathbf{u}^2 = \sum\_{i,j} \left[ \phi\_{ij} \frac{p\_{x,i}^2 + p\_{y,j}^2}{2m} + \frac{1}{2} \chi\_{ij} \right] = \sum\_{i,j} \left[ \phi\_{ij}^{\text{eq}} \frac{p\_{x,i}^2 + p\_{y,j}^2}{2m} + \frac{1}{2} \chi\_{ij}^{\text{eq}} \right]. \end{gathered} \tag{9.55}$$

It can be seen that *χij* appears only in the definition of the temperature field. It is essential to track the evolution of *φij* and *χij* simultaneously in order to correctly compute the temperature field appearing in the definition of *φ*eq given in Eq. (9.46), as well as in the definition of *χ*eq.

# *9.4.4 Non-Dimensionalisation Procedure*

In order to perform numerical simulations, we non-dimensionalise all quantities with respect to a set of reference quantities, including the channel width *L* and the reference speed *c*ref. In particular, the particle number density is normalised such that:


$$N\_{\text{tot}} = \int\_{-1/2}^{1/2} dx \, n(\mathbf{x}) = 1. \tag{9.56}$$

The reference time is *t*ref = *L/c*ref and we set *p*0*,x* = *p*0*,y* = 1 for the rest of this chapter. With the above conventions, the relaxation time *τ* is set to

$$
\tau = \frac{\text{Kn}}{nT},
\tag{9.57}
$$

which ensures that the viscosity *μ* = *τ nT* = Kn is constant throughout the simulation domain.

# **9.5 Simulation Results**

The advantage of the quadrature-based approach to LB modelling quickly becomes apparent when considering rarefied flows. An excellent arena for this type of test is represented by channel flows. In particular, we will restrict the discussion to the Couette and the force-driven Poiseuille flows between parallel plates, which have become canonical benchmark problems in the microfluidics community. In the context of these flows, the distribution function becomes discontinuous due to the diffuse reflection interaction with the boundary. Thus, at large values of Kn, half-range quadratures are much more efficient than the more traditional full-range ones [25, 26, 52, 53]. More complex flows, in which the application of the half-range Gauss–Hermite quadrature is essential, are investigated in Refs. [50, 54, 55].

# *9.5.1 Couette Flow Between Parallel Plates*

In this section, we consider the Couette flow between parallel plates. The geometry of this flow can be seen in Fig. 9.1. The system consists of two parallel plates at

**Fig. 9.1** *Left:* Setup for the Couette flow problem, highlighting the slip velocity *u*slip = *u*<sup>w</sup> − *u(*1*/*2*)*. *Right:* Boundary conditions and grid characteristics. The fine dotted lines show a grid comprised of *S* = 8 cells, stretched according to Eq. (9.58) with *A* = 0*.*95. Only one cell is used along the *y* direction

rest located at *x* = ±1*/*2.<sup>1</sup> The gas between these plates is initially in thermal equilibrium at the wall temperature *Tw* = 1. At *t* = 0, the left and right plates are set into motion with velocities −**u***w* = *(*0*,* −*uw,* 0*)* and **u***w* = *(*0*, uw,* 0*)*, respectively, as shown in Fig. 9.1 (left). The evolution of the fluid is simulated using the LB algorithm described in Sect. 9.4, until the stationary state is reached. The analysis presented in this section is restricted to the stationary state.

In the stationary state of Couette flow, rarefied gases exhibit a non-linear velocity profile in the proximity of the moving walls. This nonlinearity originates from the wall-induced discontinuity of the particle distribution function; its spatial extension (i.e., the width of the so-called Knudsen layer, over which the discontinuity is smoothed out through interparticle collisions) is of the order of the mean free path of the fluid particles [1, 2]. Diffuse reflection boundary conditions are used to capture this wall-induced discontinuity [24–26], as shown in Fig. 9.1 (right).

Mathematically, diffuse reflection boundary conditions entail that the distribution functions for the particles emerging from the walls back into the fluid satisfy $f(x = \pm L/2, \boldsymbol{p}, t) = f^{\rm eq}(\boldsymbol{p}; \pm\boldsymbol{u}_w)$, valid for $\pm p_x < 0$, respectively. Noting that $f^{\rm eq}(-\boldsymbol{p}; \boldsymbol{u}) = f^{\rm eq}(\boldsymbol{p}; -\boldsymbol{u})$, it can be seen that the solution of the Boltzmann equation (9.12) possesses the symmetry $f(-x, \boldsymbol{p}, t) = f(x, -\boldsymbol{p}, t)$. This symmetry allows only the right half of the channel to be considered, provided that bounce back boundary conditions are implemented at the channel centreline [i.e., $f(0, -\boldsymbol{p}, t) = f(0, \boldsymbol{p}, t)$]. This simplification effectively halves all computation times. Moreover, since the system is homogeneous along the *y* axis, no advection is performed in this direction and a discretisation using a single node is sufficient. In fact, this corresponds to implementing periodic boundary conditions along the *y* axis. The $p_z$ degree of freedom is integrated out, as explained in Sect. 9.4.1, and no advection is performed along the *z* direction. The right panel of Fig. 9.1 presents schematically the implementation of the Couette flow geometry.

In order to capture the Knudsen layer, it is convenient to use a grid which is more refined in the vicinity of the wall. This can be achieved by employing a standard grid-stretching procedure [56, 57]. In this chapter, we follow Refs. [50, 54, 55] and perform an equidistant grid discretisation with respect to the non-dimensional parameter *η*, defined through:

$$
x(\eta) = \frac{1}{2A} \tanh \eta,\tag{9.58}
$$

where $0 \le \eta \le \mathrm{arctanh}(A)$ and $0 < A < 1$ controls the stretching, such that as *A* → 0, the grid becomes equidistant with respect to *x*, while as *A* → 1, the grid points accumulate towards the right boundary. For a discretisation employing *S* points, we have:

<sup>1</sup>All quantities presented in this section are non-dimensionalised according to the conventions presented in Sect. 9.4.4.

$$
\eta_s = \frac{1}{S} \left( s - \frac{1}{2} \right) \mathrm{arctanh}(A), \qquad x_s = \frac{1}{2A} \tanh \eta_s,\tag{9.59}
$$

where the points with 1 ≤ *s* ≤ *S* lie within the flow domain. For the simulations presented in this section, we found that *S* = 16 points with *A* = 0*.*98 are sufficient to yield accurate results. The stretching procedure is illustrated in Fig. 9.1 (right) for a grid with *S* = 8 cells, when *A* = 0*.*95.
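The stretching map of Eqs. (9.58)–(9.59) is easy to sketch numerically; the short script below (illustrative only, with a hypothetical function name) generates the cell centres and shows how they accumulate towards the wall as *A* → 1:

```python
import numpy as np

def stretched_grid(S, A):
    """Cell centres x_s of Eq. (9.59): equidistant in eta, stretched in x."""
    s = np.arange(1, S + 1)
    eta = (s - 0.5) * np.arctanh(A) / S   # eta_s, Eq. (9.59)
    return np.tanh(eta) / (2.0 * A)       # x_s = tanh(eta_s) / (2A)

x = stretched_grid(16, 0.98)              # grid used for the Couette flow
```

As *A* → 0 the centres reduce to the equidistant values $(s - 1/2)/2S$, while for *A* close to 1 the spacing near $x = 1/2$ becomes much finer than near the centreline, which is what resolves the Knudsen layer.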

In order to employ the finite-difference scheme described in the Appendix, three ghost nodes are required on either side of the simulation domain. The bounce back boundary conditions [17, 19, 20] employed on the left side of the domain can be written as:

$$
\phi_{0;ij} = \phi_{1; \tilde{\imath}\tilde{\jmath}}, \qquad \phi_{-1;ij} = \phi_{2; \tilde{\imath}\tilde{\jmath}}, \qquad \phi_{-2;ij} = \phi_{3; \tilde{\imath}\tilde{\jmath}}, \tag{9.60}
$$

and similarly for $\chi_{s;ij}$. The notation $\tilde{\imath}$ ($\tilde{\jmath}$) refers to the component $p_{x,\tilde{\imath}}$ ($p_{y,\tilde{\jmath}}$) defined through:

$$p_{x,\tilde{\imath}} = -p_{x,i}, \qquad p_{y,\tilde{\jmath}} = -p_{y,j}.\tag{9.61}$$
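For a momentum set that is symmetric under $\boldsymbol{p} \to -\boldsymbol{p}$ (as both Gauss–Hermite quadratures are) and sorted in increasing order, the reflected indices of Eq. (9.61) are simply $\tilde{\imath} = Q_x - 1 - i$ and $\tilde{\jmath} = Q_y - 1 - j$ (0-based). A minimal sketch of the bounce-back ghost-node population of Eq. (9.60), under a hypothetical array layout:

```python
import numpy as np

def bounce_back_ghosts(phi):
    """Fill the G = 3 ghost layers per Eq. (9.60).

    phi has shape (G + S, Qx, Qy); the ghost nodes s = 0, -1, -2 are stored
    at array rows 2, 1, 0 and fluid node s sits at row G + s - 1.  Assumes
    momenta sorted and symmetric, so reflection = reversing both momentum axes.
    """
    G = 3
    for g in (1, 2, 3):                       # ghost s = 1 - g mirrors fluid s = g
        phi[G - g] = phi[G + g - 1, ::-1, ::-1]
    return phi

# Example: random fluid populations on S = 4 nodes with Qx = Qy = 4
rng = np.random.default_rng(0)
phi = np.zeros((3 + 4, 4, 4))
phi[3:] = rng.random((4, 4, 4))
bounce_back_ghosts(phi)
```

For the specular condition of Eq. (9.66) used in the Poiseuille flow, only the $p_x$ axis would be reversed (`phi[..., ::-1, :]` in this layout).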

On the right boundary, the diffuse reflection concept [24–26] is imposed. This requires that the flux of particles coming from the boundary cell at $s = S + \frac{1}{2}$ towards the first fluid node at $s = S$ is Maxwellian:

$$
\begin{pmatrix}
\Phi_{S+\frac{1}{2};ij} \\
X_{S+\frac{1}{2};ij}
\end{pmatrix} = \frac{p_{x,i}}{m} \begin{pmatrix}
\phi^{\text{eq}}_{w;ij} \\
\chi^{\text{eq}}_{w;ij}
\end{pmatrix}, \qquad p_{x,i} < 0,\tag{9.62}
$$

where $\phi^{\text{eq}}_{w;ij}$ is the reduced equilibrium distribution Eq. (9.46) corresponding to the wall parameters $n_w$, $\boldsymbol{u}_w$ and $T_w = 1$, while $\chi^{\text{eq}}_{w;ij} = \phi^{\text{eq}}_{w;ij}$. In the above, the notations $\Phi_{S+\frac{1}{2};ij}$ and $X_{S+\frac{1}{2};ij}$ represent the fluxes corresponding to the reduced distributions $\phi_{ij}$ and $\chi_{ij}$, which can be computed using Eq. (9.76) by replacing $p_x$ with $p_{x,i}$ and $f_s$ with $\phi_{ij;s}$ and $\chi_{ij;s}$, as required. Equation (9.62) can be achieved in the frame of the WENO-5 scheme [50] described in the Appendix, when

$$
\phi_{S+1;ij} = \phi_{S+2;ij} = \phi_{S+3;ij} = \phi^{\text{eq}}_{w;ij}, \qquad p_{x,i} < 0. \tag{9.63}
$$

Similar relations hold also for $\chi_{s;ij}$. The distributions of the particles travelling from the fluid towards the wall are obtained by quadratic extrapolation with respect to the equidistant *η* coordinate:

$$
\phi\_{S+1;ij} = 3\phi\_{S;ij} - 3\phi\_{S-1;ij} + \phi\_{S-2;ij}, \quad \phi\_{S+2;ij} = 6\phi\_{S;ij} - 8\phi\_{S-1;ij} + 3\phi\_{S-2;ij}.\tag{9.64}
$$
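The coefficients in Eq. (9.64) are the Lagrange extrapolation weights for equidistant nodes, so any quadratic in *η* is reproduced exactly at the two ghost positions; a quick illustrative check:

```python
# Quadratic extrapolation weights of Eq. (9.64): the values at nodes
# S-2, S-1, S (unit spacing in eta) determine a parabola, which is then
# evaluated at the ghost positions S+1 and S+2.
def extrapolate(f_S, f_Sm1, f_Sm2):
    return 3*f_S - 3*f_Sm1 + f_Sm2, 6*f_S - 8*f_Sm1 + 3*f_Sm2

q = lambda eta: 1.5 - 0.7 * eta + 0.2 * eta**2   # arbitrary quadratic
g1, g2 = extrapolate(q(3.0), q(2.0), q(1.0))      # extrapolate to eta = 4, 5
```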

The same relations are valid for *χs*;*ij* . The wall density *nw* can be obtained by imposing mass conservation:

$$\sum_{i,j} \Phi_{S+\frac{1}{2};ij} = 0 \quad\Rightarrow\quad n_w = -\frac{\displaystyle\sum_{i,j,\ p_{x,i} > 0} \Phi_{S+\frac{1}{2};ij}}{\displaystyle\sum_{i,j,\ p_{x,i} < 0} \frac{\phi^{\text{eq}}_{w;ij}}{n_w}\, \frac{p_{x,i}}{m}}.\tag{9.65}$$

It can be seen that the accurate computation of $n_w$ requires the recovery of half-space quadrature sums, which is why we chose the half-range Gauss–Hermite quadrature on the *x* axis.
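In code, Eq. (9.65) amounts to scaling the wall Maxwellian so that the outgoing and incoming mass fluxes cancel. A sketch under assumed array conventions (all names hypothetical):

```python
import numpy as np

def wall_density(flux_towards_wall, phi_eq_unit, px, m=1.0):
    """Solve Eq. (9.65) for n_w.

    flux_towards_wall[i, j]: fluxes Phi_{S+1/2;ij} for p_{x,i} > 0
                             (particles travelling from the fluid to the wall);
    phi_eq_unit[i, j]:       phi_eq_{w;ij} / n_w, i.e. the wall Maxwellian
                             evaluated at unit density."""
    outgoing = flux_towards_wall[px > 0].sum()
    incoming_per_nw = (phi_eq_unit * px[:, None] / m)[px < 0].sum()
    return -outgoing / incoming_per_nw

# Consistency check: if the gas is already in equilibrium with the wall at
# density n = 2.7, mass conservation must return exactly that density.
px = np.array([-1.5, -0.5, 0.5, 1.5])
py = np.array([-1.0, 1.0])
phi_eq_unit = np.exp(-px[:, None]**2 / 2) * np.exp(-py[None, :]**2 / 2)
flux = 2.7 * phi_eq_unit * px[:, None]
n_w = wall_density(flux, phi_eq_unit, px)
```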

We illustrate the capabilities of our models by considering the velocity profile for a low Mach number flow ($u_w = 0.1$), performing simulations at various values of Kn. While the Navier–Stokes equations predict a straight-line velocity profile $u_y = 2xu_w/L$ [60, 61], the kinetic analysis shows that in the vicinity of the boundary there is always a Knudsen layer, of extension comparable to the particle mean free path, in which the velocity profile is curved [42]. Figure 9.2 (left) shows the excellent agreement between our LB results and the benchmark linearised Boltzmann–BGK results reported in Ref. [58]. These results are also presented in Ref. [53], but with less accuracy and for a smaller range of values of Kn. The dependence of the slip velocity on Kn is shown in Fig. 9.2 (right), where our results are compared with the linearised Boltzmann–BGK results reported in Refs. [58, 59]. Excellent agreement is found in both cases. In order to compare our simulation results with those reported in Ref. [58], we employed the relation $\text{Kn} = k/\sqrt{2}$ between the Knudsen number defined in Eq. (9.57) and the parameter *k* employed in Ref. [58]. The quadrature orders used in these simulations were *Qx* = 4 (*k* ≤ 0*.*1), 5 (*k* = 0*.*3), 10 (*k* = 1), 11 (*k* = 2), 20

**Fig. 9.2** Validation of the LB results (lines) in the context of the Couette flow. *Left:* Comparison of the velocity profile in the half-channel 0 ≤ *x* ≤ 1*/*2 with the benchmark results reported in Ref. [58] at various values of $k = \text{Kn}\sqrt{2}$ (points). *Right:* Comparison of the slip velocity as a function of Kn with the linearised Boltzmann–BGK results reported in Refs. [58] (circles) and [59] (squares)

(*k* = 5) and 40 (*k* = 30). As *k* is increased, the fluid velocity at the wall approaches 0 (its free-streaming value). Since the slip velocity can be recovered accurately even if the velocity profile presents visible deviations with respect to the benchmark data, the results presented in the right panel of Fig. 9.2 were obtained using a quadrature order *Qx* = 21 for all values of Kn.

# *9.5.2 Force-Driven Poiseuille Flow Between Parallel Plates*

In this section, we consider the force-driven Poiseuille flow between parallel plates. The geometry of this flow can be seen in Fig. 9.3 (left). The system consists of two parallel plates at rest, which are taken to be perpendicular to the *x* axis. The gas between these plates is initially in thermal equilibrium at the wall temperature $T_w = 1$. At *t* = 0, a constant force $\boldsymbol{F} = (0, ma, 0)$ is applied throughout the fluid domain. According to the non-dimensionalisation discussed in Sect. 9.4.4, the acceleration *a* is expressed in units of $c^2_{\text{ref}}/L$ and *m* = 1. The evolution of the fluid is simulated using the LB algorithm presented in Sect. 9.4.

The flow geometry, the boundary conditions and the Boltzmann equation (9.12) possess the symmetry property $f(-x, p_x, p_y, t) = f(x, -p_x, p_y, t)$. As was the case for the Couette flow, this symmetry allows only half of the channel to be simulated ($0 \le x \le \frac{1}{2}$), while the symmetry $f(0, p_x, p_y, t) = f(0, -p_x, p_y, t)$ is ensured using specular boundary conditions [17, 19, 20], as shown in Fig. 9.3 (right). In order to implement specular boundary conditions, the distribution functions in the nodes to the left of the flow domain, having indices *s* = 0*,* −1*,* −2, are populated according to:

**Fig. 9.3** *Left:* Setup for the force-driven Poiseuille flow problem, highlighting the slip velocity $u_{\rm slip} = u(1/2)$. The mass flow rate is shown in the shaded area. *Right:* Boundary conditions and grid characteristics. The fine dotted lines show a grid comprising *S* = 8 cells, stretched according to Eq. (9.58) with *A* = 0*.*95. Only one cell is used along the *y* direction

$$
\phi\_{0;ij} = \phi\_{1;\widetilde{i}j}, \qquad \phi\_{-1;ij} = \phi\_{2;\widetilde{i}j}, \qquad \phi\_{-2;ij} = \phi\_{3;\widetilde{i}j}, \tag{9.66}
$$

where only the *x* component of the momentum is reversed in the right-hand side of the above equations, as shown in Eq. (9.61). On the right boundary, the diffuse reflection concept is imposed, as discussed in Sect. 9.5.1. Furthermore, the grid is stretched using Eq. (9.58), with *A* = 0*.*98. In order to accurately capture the main features of the flow, we employed *S* = 32 nodes along the *x* axis, distributed in the right half of the channel.

In the case of the force-driven Poiseuille flow, we discuss two features which manifest at non-negligible values of Kn. The first one refers to the Knudsen paradox, according to which the flow rate through the channel decreases with Kn from its value in the Navier–Stokes limit down to a minimum, after which it increases towards infinity as the ballistic regime settles in. The non-dimensionalised mass flow rate *Q*flow can be computed as follows:

$$Q_{\rm flow} = \frac{\sqrt{8}}{a\sqrt{\pi}} \int_{-1/2}^{1/2} dx \,\rho(x) u_y(x). \tag{9.67}$$

For small values of Kn, Cercignani [62] derived the following approximation for *Q*flow:

$$\mathcal{Q}_{\text{flow}} \simeq \frac{1}{6\widetilde{\text{Kn}}} + s + (s^2 - 1)\widetilde{\text{Kn}},\tag{9.68}$$

where $s = 1.01615$ and $\widetilde{\text{Kn}}$ is defined as:

$$
\widetilde{\text{Kn}} = \text{Kn} \sqrt{\frac{\pi}{2}}.\tag{9.69}
$$

While accurate at small values of Kn, Eq. (9.68) predicts a linear increase of *Q*flow with Kn, which is not confirmed by experiments or numerical simulations.

An empirical fitting formula was given by Sharipov in Eq. (11.136) of Ref. [9]:

$$G\_P^\* = -\frac{\ln \delta}{\sqrt{\pi}} + 0.376 - (1.77 \ln \delta + 0.584)\delta + 2.12 \delta^2. \tag{9.70}$$

In this formula, which extends the asymptotic term $-\ln \delta/\sqrt{\pi}$ derived by Cercignani [62], the rarefaction parameter *δ* and $G_P^*$ are related to Kn and *Q*flow through:

$$\delta = \frac{1}{\text{Kn}\sqrt{2}} = \frac{\sqrt{\pi}}{2\widetilde{\text{Kn}}}, \qquad \mathcal{Q}\_{\text{flow}} = G\_P^\* \sqrt{\frac{4}{\pi}}.\tag{9.71}$$
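The Knudsen minimum can be read off directly from these formulae: Eq. (9.68) has a stationary point at $\widetilde{\text{Kn}} = 1/\sqrt{6(s^2-1)} \approx 2.26$, while Sharipov's fit, Eqs. (9.70)–(9.71), remains usable in the highly rarefied regime. A short illustrative script (not the chapter's code):

```python
import numpy as np

s = 1.01615                                   # slip coefficient in Eq. (9.68)

def q_cercignani(kn_tilde):
    """Small-Kn approximation of the flow rate, Eq. (9.68)."""
    return 1.0 / (6.0 * kn_tilde) + s + (s**2 - 1.0) * kn_tilde

def q_sharipov(kn):
    """Sharipov's empirical fit, Eqs. (9.70)-(9.71), as a function of Kn."""
    delta = 1.0 / (kn * np.sqrt(2.0))
    g_p = (-np.log(delta) / np.sqrt(np.pi) + 0.376
           - (1.77 * np.log(delta) + 0.584) * delta + 2.12 * delta**2)
    return g_p * np.sqrt(4.0 / np.pi)

kn_tilde = np.linspace(0.05, 10.0, 2000)
q = q_cercignani(kn_tilde)
kn_tilde_min = kn_tilde[np.argmin(q)]         # location of the Knudsen minimum
```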

Our numerical results for *Q*flow, together with the approximations in Eqs. (9.68) and (9.70)–(9.71), as well as various other semi-analytical and numerical results, are shown in Fig. 9.4 (left). These results were obtained using the mixed LB model described in Sect. 9.4.2, with the order of the half-range Gauss–Hermite quadrature set to *Qx* = 21. Since the velocity profile, and hence the mass flow rate, does not scale linearly with *a* at large values of *a*, we used *a* = 0*.*01 throughout the simulations in order to ensure good agreement with the validation data.

The second remarkable microfluidics-specific effect occurring in the force-driven Poiseuille flow is the development of a dip (local minimum) in the temperature profile *T (x)* at the centre *x* = 0 of the channel. The dip was predicted by kinetic theory at the super-Burnett level and observed in DSMC simulations [1, 65–71].

Using a moments method approach, Mansour et al. [67, 68] derived analytically the following dependence of the temperature profile on the distance *x* from the centre of the channel:

$$T(x) = T_0 + \alpha x^2 + \beta x^4. \tag{9.72}$$

Using a numerical fit, we found an excellent match between the above functional form and our simulation results. For clarity, Fig. 9.4 (right) shows the half-channel profile of $[T(x) - 1]/(T_0 - 1)$, where $T_w = 1$ is the wall temperature and $T_0$ represents the temperature at the centre of the channel, as determined by fitting Eq. (9.72) to the numerical data. The values of the parameters $T_0$, *α* and *β* for the values of Kn considered in Fig. 9.4 (right) are given in Table 9.1. In these simulations

**Fig. 9.4** Validation of the LB results in the context of the force-driven Poiseuille flow. *Left:* Comparison between the LB results (continuous line) for the flow rate *Q*flow, defined in Eq. (9.67), and the asymptotic formulae in Eqs. (9.68) and (9.70) due to Cercignani [62] and Sharipov [9] (dashed lines), the results of Cercignani, Lampis and Lorenzani (CLL) [63] (dashed line with filled squares), the DVM results from Ref. [64] (hollow circles), as well as the DSMC results reported by Feuchter and Scheifenbaum in Ref. [52] (filled circles). The results are represented with respect to $\widetilde{\text{Kn}}$, defined in Eq. (9.69). *Right:* Illustration of the dip in the temperature profile at various values of Kn. The lines represent the best fits of the analytic expression, Eq. (9.72), to the LB results (points), as described in Sect. 9.5.2 and in Table 9.1


**Table 9.1** Values of the parameters $T_0$, *α* and *β* obtained by fitting Eq. (9.72) to the LB results. Only the points inside the domain 0 *< x/L <* 0*.*4 are taken into account when performing the fit

we used *a* = 0*.*05 in order to enhance the development of the temperature dip. We employed the mixed quadrature LB model described in Sect. 9.4.2 where we set *Qx* = 4 for Kn ∈ {0*.*032*,* 0*.*05*,* 0*.*1}, while at Kn = 0*.*2, we used *Qx* = 7.
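Fitting Eq. (9.72) is a linear least-squares problem in the basis $\{1, x^2, x^4\}$; the sketch below (with synthetic data standing in for the LB profiles) recovers $T_0$, *α* and *β*:

```python
import numpy as np

def fit_dip(x, T):
    """Least-squares fit of T(x) = T0 + alpha x^2 + beta x^4, Eq. (9.72)."""
    M = np.stack([np.ones_like(x), x**2, x**4], axis=1)
    coef, *_ = np.linalg.lstsq(M, T, rcond=None)
    return coef                                # (T0, alpha, beta)

x = np.linspace(0.0, 0.4, 41)                  # fit window, as in Table 9.1
T = 0.97 + 0.5 * x**2 - 1.2 * x**4             # mock profile with central dip
T0, alpha, beta = fit_dip(x, T)
```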

# **9.6 Conclusions**

In this chapter, we presented a systematic procedure for the construction of high-order mixed quadrature LB models based on the full-range and half-range Gauss–Hermite quadratures. Particular attention was given to the case when the flow is homogeneous along the *z* axis, where reduced distribution functions can be used in order to minimise the computational effort. The capabilities of these models were demonstrated in the context of the Couette and force-driven Poiseuille flows between parallel plates at various values of Kn. Excellent agreement was found between our results and benchmark data available in the literature, from the Navier–Stokes level up to the transition regime.

The success of our models relies on the accurate recovery of half-range integrals required for the implementation of diffuse reflection. Such integrals are exactly recovered by employing the half-range Gauss–Hermite quadrature.

Our numerical method for solving the LB evolution equations employs finite-difference techniques. In particular, we implemented the advection using the fifth-order weighted essentially non-oscillatory (WENO-5) scheme, while the time-stepping was performed using the third-order Runge–Kutta (RK-3) method. This allowed us to obtain accurate results using a small number of nodes (16 for the Couette flow and 32 for the force-driven Poiseuille flow).

Taking advantage of the homogeneity of the flows studied in this chapter with respect to the *z* axis, we eliminated the *z* axis degree of freedom by integrating the Boltzmann–BGK equation with respect to $p_z$. In order to correctly track the evolution of the temperature and heat flux fields, we employed two reduced distribution functions, *φ* and *χ*, obtained by integrating with respect to $p_z$ the Boltzmann distribution *f* multiplied by 1 and $p_z^2/m$, respectively. The extension of the methodology presented in this chapter to more complex flow domains is straightforward, since the mixed quadrature paradigm allows the type of quadrature and the quadrature orders to be adjusted for each axis separately. The treatment of complex boundaries can be performed using the standard staircase approximation [72, 73] or the more recent vielbein approach [50]. Finally, more complex relaxation time models, such as the Shakhov model [74, 75], can be implemented as described in, e.g., Refs. [54, 55].

We conclude that the models described in this chapter can be used to obtain numerical solutions of the Boltzmann–BGK equation for channel flows at arbitrary values of the Knudsen number.

# **Appendix: Numerical Scheme**

The simulation results presented in this chapter were obtained using an explicit third-order total variation diminishing (TVD) Runge–Kutta (RK-3) time marching procedure [76–79], together with the fifth-order weighted essentially non-oscillatory (WENO-5) scheme [80, 81] for computing the advection.

In order to implement the time-stepping algorithm, it is convenient to cast the Boltzmann–BGK equation (9.1) in the following form:

$$
\partial\_t f = L[f], \qquad L[f] = -\frac{\mathbf{p}}{m} \cdot \nabla f - \mathbf{F} \cdot \nabla\_\mathbf{p} f - \frac{1}{\tau} (f - f^{\text{eq}}). \tag{9.73}
$$

Following the discretisation of the time variable using equal time steps *δt*, the distribution function at time step *l* is $f_l \equiv f(t_l)$, where the time coordinate has the value $t_l = l\,\delta t$, taken with respect to the initial time $t_0 = 0$. For simplicity, the dependence of the distribution function on the spatial coordinates and on the momentum degrees of freedom is omitted. The third-order Runge–Kutta TVD integrator, described by the Butcher tableau summarised in Table 9.2, gives the following algorithm for computing the value $f_{l+1}$ of the distribution function at time $t_{l+1}$:

$$f\_l^{(1)} = f\_l + \delta t \, L[f\_l],$$

$$f\_l^{(2)} = \frac{3}{4} f\_l + \frac{1}{4} f\_l^{(1)} + \frac{1}{4} \delta t \, L[f\_l^{(1)}],$$

$$f\_{l+1} = \frac{1}{3} f\_l + \frac{2}{3} f\_l^{(2)} + \frac{2}{3} \delta t \, L[f\_l^{(2)}].\tag{9.74}$$
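The three stages of Eq. (9.74) translate directly into code. As a sketch, the update below is applied to the scalar relaxation problem $\partial_t f = -f/\tau$ (a stand-in for the full operator $L[f]$ of Eq. (9.73)), for which the exact solution is known:

```python
import math

def rk3_step(f, L, dt):
    """One TVD RK-3 step, Eq. (9.74)."""
    f1 = f + dt * L(f)
    f2 = 0.75 * f + 0.25 * f1 + 0.25 * dt * L(f1)
    return f / 3.0 + 2.0 * f2 / 3.0 + 2.0 * dt * L(f2) / 3.0

tau = 1.0
L = lambda f: -f / tau          # pure BGK-like relaxation towards f_eq = 0
f, dt = 1.0, 0.01
for _ in range(100):            # integrate up to t = 1
    f = rk3_step(f, L, dt)
```

For this problem the scheme matches $e^{-\delta t/\tau}$ through third order in *δt*, so after 100 steps the accumulated error is of order *δt*³.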

For more information regarding the Butcher tableaux representation, we refer the reader to Ref. [82].


#### 9 Quadrature-Based Lattice Boltzmann Models for Rarefied Gas Flow 295

The advection term is computed as follows:

$$\frac{\boldsymbol{p}}{m} \cdot \nabla f = \frac{p_x}{m} \partial_x f + \frac{p_y}{m} \partial_y f + \frac{p_z}{m} \partial_z f. \tag{9.75}$$

Since the flows considered in this chapter are effectively one-dimensional (being homogeneous with respect to the *y* and *z* axes), the discussion on the implementation of the WENO-5 scheme for the computation of the above derivatives will cover only the derivative with respect to the *x* coordinate. Considering that the spatial domain is discretised equidistantly with respect to the *η* coordinate Eq. (9.59), the derivative with respect to *x* can be written as:

$$\left(\frac{p_x}{m}\partial_x f\right)_s = \frac{\mathcal{F}_{s+1/2} - \mathcal{F}_{s-1/2}}{x_{s+1/2} - x_{s-1/2}}.\tag{9.76}$$

The flux $\mathcal{F}_{s+1/2}$ corresponding to the interface between the cells centred on $x_s \equiv x(\eta_s)$ and $x_{s+1}$ is computed in an upwind-biased approach using the WENO-5 algorithm [50, 80, 83], which we summarise below for the case when the advection velocity $p_x/m > 0$:

$$\mathcal{F}\_{s+1/2} = \overline{\omega}\_1 \mathcal{F}\_{s+1/2}^1 + \overline{\omega}\_2 \mathcal{F}\_{s+1/2}^2 + \overline{\omega}\_3 \mathcal{F}\_{s+1/2}^3. \tag{9.77}$$

The interpolating functions $\mathcal{F}^q_{s+1/2}$ ($q = 1, 2, 3$) are given by:

$$\begin{aligned} \mathcal{F}^1_{s+1/2} &= \frac{p_x}{m} \left( \frac{1}{3} f_{s-2} - \frac{7}{6} f_{s-1} + \frac{11}{6} f_s \right), \\ \mathcal{F}^2_{s+1/2} &= \frac{p_x}{m} \left( -\frac{1}{6} f_{s-1} + \frac{5}{6} f_s + \frac{1}{3} f_{s+1} \right), \\ \mathcal{F}^3_{s+1/2} &= \frac{p_x}{m} \left( \frac{1}{3} f_s + \frac{5}{6} f_{s+1} - \frac{1}{6} f_{s+2} \right), \end{aligned} \tag{9.78}$$

while the weighting factors $\overline{\omega}_q$ are defined as:

$$
\overline{\omega}\_q = \frac{\widetilde{\omega}\_q}{\widetilde{\omega}\_1 + \widetilde{\omega}\_2 + \widetilde{\omega}\_3}, \qquad \widetilde{\omega}\_q = \frac{\delta\_q}{\sigma\_q^2}. \tag{9.79}
$$

The ideal weights *δq* are:

$$\delta_1 = 1/10, \qquad \delta_2 = 6/10, \qquad \delta_3 = 3/10, \tag{9.80}$$

while the indicators of smoothness *σq* are given by:

$$
\sigma\_1 = \frac{13}{12} \left( f\_{s-2} - 2f\_{s-1} + f\_s \right)^2 + \frac{1}{4} \left( f\_{s-2} - 4f\_{s-1} + 3f\_s \right)^2,
$$



$$
\sigma\_2 = \frac{13}{12} \left( f\_{s-1} - 2f\_s + f\_{s+1} \right)^2 + \frac{1}{4} \left( f\_{s-1} - f\_{s+1} \right)^2,
$$

$$
\sigma\_3 = \frac{13}{12} \left( f\_s - 2f\_{s+1} + f\_{s+2} \right)^2 + \frac{1}{4} \left( 3f\_s - 4f\_{s+1} + f\_{s+2} \right)^2. \tag{9.81}
$$

In the case when one, two or all three of the $\sigma_q$ indicators vanish, the computation of the weighting factors $\overline{\omega}_q$ using Eq. (9.79) involves division by zero. In this case, the weighting factors can be computed directly, as shown in Table 9.3. Alternatively, a small quantity $\varepsilon \simeq 10^{-6}$ can be added to the $\sigma_q$ functions. A more thorough discussion of the side effects of this approach can be found in Ref. [77].
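Collecting Eqs. (9.77)–(9.81), the reconstruction of a single interface flux fits in a few lines. The sketch below uses the ε-regularisation mentioned above and assumes a positive advection velocity (function name illustrative):

```python
import numpy as np

def weno5_flux(f, s, v=1.0, eps=1e-6):
    """WENO-5 flux F_{s+1/2} for advection velocity v > 0, Eqs. (9.77)-(9.81)."""
    fm2, fm1, f0, fp1, fp2 = f[s-2], f[s-1], f[s], f[s+1], f[s+2]
    # candidate fluxes on the three stencils, Eq. (9.78)
    c = np.array([fm2/3 - 7*fm1/6 + 11*f0/6,
                  -fm1/6 + 5*f0/6 + fp1/3,
                  f0/3 + 5*fp1/6 - fp2/6]) * v
    # smoothness indicators, Eq. (9.81)
    sig = np.array([13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2,
                    13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2,
                    13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2])
    w = np.array([0.1, 0.6, 0.3]) / (eps + sig)**2   # Eqs. (9.79)-(9.80)
    return (w / w.sum()) @ c                          # Eq. (9.77)
```

For constant data all three candidate fluxes coincide, so the result reduces to $v f$ regardless of the weights; for linear data the scheme returns the exact midpoint value.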

# **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

© The Editor(s) (if applicable) and The Author(s) 2019 F. Toschi, M. Sega (eds.), *Flowing Matter*, Soft and Biological Matter, https://doi.org/10.1007/978-3-030-23370-9

Anti-solvent, 103 AOUP model, *see* Active Ornstein-Uhlenbeck particle model Artificial diffusivity (AD), 7 Asymmetric Janus particles, 97–98 Augmented Lagrangian algorithm, 9

### **B**

Bancroft rules, 123 Beris–Edwards model, 70–71 Bimetallic Janus particles, 99 Bingham fluid, 3 Bingham number, 10 Boltzmann–BGK equation, 294 Boltzmann distribution, 246 Born–Green–Yvon (BGY) hierarchy, 250–252 Brinkman hierarchy, 259 Brownian dynamics active particles (*see* Active particles) features, 211 passive Brownian motion Einstein's fluctuation–dissipation relation, 213 Euler–Maruyama scheme, 213 finite-difference approach, 214–215 Langevin equation, 214 non-negligible effect, 212 particle orientation, 213 translational and rotational erratic motions, 213 Bulk diffusivity, 108

### **C**

Cahn–Hilliard/Navier–Stokes model, 22 Cahn number, 24 Capillary electro-jetting methods, 101–102 Carreau fluid, 5 Chiral active Brownian motion, 219–220 Commutativity, 222 Complex fluids force coupling method, 25–26 Herschel–Bulkley formula, 3, 4 macroscopic approaches (*see* Eulerian/Eulerian methods, macroscopic approach) microscopic approaches Eulerian/Eulerian methods (*see* Eulerian/Eulerian methods, microscopic approach) Eulerian/Lagrangian methods (*see* Immersed boundary methods (IBM))

microstructure, 1 point particle method, 25 rheology macroscopic behaviour, 1, 2 Newtonian and non-Newtonian rheology, 2–4 volume of fluid tensorial penalty method, 26 Consistency index, 5 Continuous time random walk (CTRW) models, 148–150 Continuum surface force (CSF) model, 20 Couette flow, 2, 3 diffuse reflection boundary conditions, 287 equidistant grid discretisation, 287–288 finite-difference scheme, 288 geometry of, 286–287 Knudsen layer, 287–289 mass conservation, 288–289 reduced distributions, 281–282, 288 velocity profile, 289–290

### **D**

Damköhler number, 156, 162 Darcy-scale reaction–dispersion models, 156 Darcy's law, 139–142 Derjaguin–Landau–Verwey–Overbeek (DLVO) theory, 117 Dilatant shear, 2 Dipole–dipole repulsion, 117 Dirac delta function, 12, 13 Direct numerical simulation (DNS), 195 Direct simulation Monte Carlo (DSMC) technique, 271, 292 Discrete unified gas-kinetic scheme (DUGKS), 271 Discrete velocity models (DVMs), 271–272 Divergence-free velocity field, 20

### **E**

Effective active colloids, 220 Effective potential energy, 249 Electrodynamic co-jetting methods, 102 Ericksen–Leslie–Parodi (ELP) approach, 66–70 Ericksen stress tensor, 66 Eulerian/Eulerian methods, macroscopic approach fluid–structure interaction, 11–12 inelastic shear-thinning/shear-thickening fluids, 4–5

viscoelastic fluids conformation tensor, 7–8 finite elastic non-linear elastic model, 6 global AD, 7 Kelvin–Voigt model, 6 Oldroyd-B model, 6 polymeric stress tensor, 7 strain rate dependence on time, 5 Weissenberg number, 7 WENO schemes, 7 viscoplasticity augmented Lagrangian algorithm, 9 Bingham number, 10 elastoviscoplastic fluids, 6, 10 general Saramito model, 10 Oldroyd viscoelastic model, 10 regularisation approach, 9 Uzawa algorithm, 9 yield stress, 8–9 Eulerian/Eulerian methods, microscopic approach LS method, 18, 21–22 phase-field methods, 22–25 VOF method, 18–20 Eulerian framework advantage of, 202–203 Cartesian grid, 202 Lagrangian frame of reference, 206 limitations, 205–206 results, 203–205 second-order moments, 202–203 separation distance, 205–206 statistical convergence, 203 third-order correction, 202

### **F**

Fast marching method (FMM), 21 Fickian dispersion hydrodynamic dispersion, 146 inverse Gaussian distribution, 145–146 macro-dispersion, 146–147 one-dimensional transport, 145 Fifth-order weighted essentially nonoscillatory (WENO-5) scheme, 294–295 Finite elastic non-linear elastic (FENE) model, 6 Finite elastic non-linear extensibility-Peterlin (FENE-P) model, 6 Flash nanoprecipitation (FNP), 102 Flory–Huggins interaction parameter, 101, 102 Force coupling method, 25–26

Four-frame best estimate (4BE) method, 193–195 4BE with nearest neighbour initialisation (4BE-NN), 194 Four-frame minimal acceleration method (4MA), 193 Fox method, 247 Front-tracking methods, 16–18 Full-range Hermite polynomials, 275–277 *F*-values, 108

### **G**

Gaussian coloured noise (GCN) active force, 241–242 balance condition, 266–267 entropy production and heat flux, 264–266 *See also* Unified coloured noise approximation (UCNA) Gaussian noise model, 219–221 Ghost fluid method (GFM), 22 Gibbs free energy, 110–116

### **H**

Half-range Hermite polynomial, 278–280 Hard-sphere correction, 226–227 Helmholtz equation, 5 Hermite function, 260–261 Herschel–Bulkley formula, 3, 4 Hexadecane–water interface, 117 Hierarchical multi-particle nematic colloidal structure, 84 High-order upstream-central (HOUC) scheme, 21 Homogeneous particles (HPs) amphiphilicity, 96–97 contact angle and interfacial adsorption energies, 110–116 Pickering emulsions, 97 Hopf link colloidal particle, 86, 87 H-theorem, 249–250 Hydrodynamic similarity, 37 Hypothetical amphiphilic dumbbell Janus particle, 107

### **I**

Immersed boundary methods (IBM) advantages and disadvantages, 12 Dirac delta function, 12, 13 Eulerian grid, 12, 13 feedback forcing, 14

Immersed boundary methods (IBM) (*cont.*) front-tracking methods, 16–18 Lagrangian grid, 12, 13 rigid particles, suspension of, 14–16 Incomplete mixing, 156–157 Inward retraction of Lagrangian grid, 16

### **J**

Janus balance, 106
Janus particles (JPs)
  at air–water interface, 97
  amphiphilicity, 96–97
  asymmetric, 97–98
  definition, 96
  Gibbs free energy, 110–116
  interfacial activity and adsorption
    adsorption kinetics, 108–109
    bulk diffusivity, 108
    contact angle and interfacial adsorption energies, 110–116
    dynamic surface tension, 108
    IFT *vs.* time, 109
    inter-particle interaction, 117–118
    magnitude of *Δ*IFT, 110
    spontaneous adsorption at interfaces, 118–123
  nanomotors, 130–131
  Pickering emulsions, 97
    advantage, 123
    catastrophic phase inversion, 126
    coalescence, 123
    formulation, 124, 125
    *vs.* HPs, 124
    Ostwald ripening, 123
    particle affinity, 123–124, 126
    polystyrene/JNP colloidosomes, 126
    stimuli-responsive Pickering emulsions, 127–128
  self-assembly, 128–130
  synthetic preparation routes
    masking and asymmetric modification, 99–100
    microfluidic and capillary electro-jetting methods, 101–102
    polymer co-precipitation and phase separation, 102–103
    seeded emulsion polymerisation and phase separation, 100–101
  tuning surface polarity
    aspect ratio and HLB values, 107, 108
    homologous series, 103
    PS/P(3-TSPM) JPs, 103, 104
    surface polarity contrast, 105–108
  types, 96
*J*-value, *see* Janus balance

### **K**

Kelvin–Voigt model, 6
Knot-shaped colloidal particles, 86
Knudsen layer, 287–289
Kullback–Leibler entropy, 250

### **L**

Lagrangian framework
  assumptions, 201
  filtering processes, 201, 202
  first-order increments, 197, 198, 201
  noiseless velocity and acceleration, 197–198
  noise magnitude, 202
  results, 198–201
  second-order increments, 197, 198, 201
Lagrangian points, 14
Lamellar mixing, 153–155
Landau–de Gennes free energy approach
  elastic free energy, 57–58
  electric field effects, 59–60
  magnetic field effects, 60–61
  phase transition, 55–56
  surface anchoring, 58–59
  volume density, 55
Lattice Boltzmann (LB) models
  one-dimensional model
    Boltzmann–BGK equation, 275
    full-range Gauss–Hermite quadrature, 275–277
    half-range Gauss–Hermite quadrature, 277–280
    quadrature points, 275
  simulation results
    Butcher tableaux representation, 294
    Couette flow, 286–290
    force-driven Poiseuille flow, 290–293
    indicators of smoothness, 295–296
    interpolating functions, 295
    third-order Runge–Kutta TVD integrator, 294
    time-stepping algorithm, 294
    weighting factors, 295–296
    WENO-5 scheme, 294–295
  three-dimensional momentum space
    equilibrium distribution, 284–285
    kernel, 283
    macroscopic moments, 285
    momentum vector, 283
    non-dimensionalisation procedure, 285–286
    reduced distributions, 281–283
Level-set (LS) method, 18, 21–22
Lipschitz-continuous function, 18

### **M**

Macroscopic effective reaction rate, 162
Macroscopic transport models
  anomalous dispersion
    CTRW models, 148–150
    fractional advection–dispersion equations, 147
    mechanical dispersion, 147
    MRMT approach, 150–151
  Fickian dispersion
    hydrodynamic dispersion, 146
    inverse Gaussian distribution, 145–146
    macro-dispersion, 146–147
    one-dimensional transport, 145
  in heterogeneous media
    advection–dispersion equation, 144, 145
    isotropic medium, 145
    Péclet numbers, 145
  mixing and chemical reactions
    diffusion and dispersion, 152
    in heterogeneous porous media, 155–156
    incomplete mixing, 156–157
    lamellar mixing, 153–155
    mixing-limited chemical reactions, 157–158
    scalar dissipation and concentration statistics, 153–154
    and spreading in porous media, 152–153
Markovian approximation, 242–243
Mean square displacement (MSD), 216–218
Microfluidic co-flow system, 101
Microposts, 84, 85
Minimum energy dissipation theorem
  divergence-free vector field, 42
  extensive energy dissipation rate, 42–43
  inclusion monotonicity principle, 43–45
  intensive energy dissipation rate, 42
Mixed quadrature LB models, 282–283
Mixing and chemical reactions
  diffusion and dispersion, 152
  in heterogeneous porous media, 155–156
  incomplete mixing, 156–157
  lamellar mixing, 153–155
  mixing-limited chemical reactions, 157–158
  scalar dissipation and concentration statistics, 153–154
  and spreading in porous media, 152–153
Modified initialisation (MI) method for 4BE (4BE-MI), 194
MSD, *see* Mean square displacement
Multi-dimensional THINC (MTHINC) method, 19
Multidirect forcing scheme, 15
Multi-Gaussian random field, 142
Multi-rate mass transfer (MRMT) approach, 150–151
Multivariate Gaussian distribution, 244

### **N**

Nanomotors, 130–131
Navier–Stokes/Cahn–Hilliard model, 24, 25
Navier–Stokes equation, 22–24, 36–37
Nematic colloids, 77–78
  assembly and self-assembly, 82–86
  complex-shaped and topological colloids, 86–88
  interparticle interactions
    bubble-gum configuration, 82
    dipolar nematic colloids aggregation, 81
    director field configurations, 79–81
    elastic quadrupoles, 81
    escaped defect lines, 82
    hedgehog defect, 79
    homeotropic anchoring, 82
    long-range interactions, 79
    micrographs, 79, 80
    nematic configurations, 82, 83
    polarisation micrographs, 80
    Saturn ring defect, 79, 82
    single spherical particle, 78–79
Nematic fluids
  active fluids, 51, 52
  active matter, 53
  colloids (*see* Nematic colloids)
  equilibrium nematic configurations, 52
  field structures, 53
  Landau–de Gennes free energy approach
    elastic free energy, 57–58
    electric field effects, 59–60
    magnetic field effects, 60–61
    phase transition, 55–56
    surface anchoring, 58–59
    volume density, 55
  microfluidics
    colloidal particles in, 77
    flows in channels, 73–74
    junctions, 75–77
    rheological properties, 73
  nematic ordering, effects of, 52
  nematic order parameters, 54–55
  nematodynamics, 52
    active nematics, 71–72
    Beris–Edwards model, 70–71
    Ericksen–Leslie–Parodi approach, 66–70
    Ericksen stress tensor, 66
    flow field, 65
    incompressibility condition, 65
    Qian–Sheng model, 71
    stress tensor, 65
  nematogens, 52
  orientational order, 51
  Q-tensor, velocity effects on, 52
  topological defects
    line defects, 62–63
    regular/irregular structures, 53
    shape of, 52
    singular point defects, 61–62
    topological theory, 64–65
    umbilic defects, 63–64
Nematic order parameters, 54–55
Nematodynamics, 52
  active nematics, 71–72
  Beris–Edwards model, 70–71
  Ericksen–Leslie–Parodi approach, 66–70
  Ericksen stress tensor, 66
  flow field, 65
  incompressibility condition, 65
  Qian–Sheng model, 71
  stress tensor, 65
Nematogens, 52
Newton–Euler equations, 15
Newtonian rheology, 2–4
Noise reduction
  Eulerian structure functions
    advantage of, 202–203
    Cartesian grid, 202
    Lagrangian frame of reference, 206
    limitations, 205–206
    results, 203–205
    second-order moments, 202–203
    separation distance, 205–206
    statistical convergence, 203
    third-order correction, 202
  experimental signal derivatives, 196
  Lagrangian auto-correlation functions
    assumptions, 201
    filtering processes, 201, 202
    first-order increments, 197, 198, 201
    noiseless velocity and acceleration, 197–198
    noise magnitude, 202
    results, 198–201
    second-order increments, 197, 198, 201
  optical calibration, 178
  second-order Taylor expansion, 196–197
  temporal increment, 196
Non-aligning interactions, 225
Non-commutativity, 223
Non-Newtonian rheology, 2–4

### **O**

Oldroyd-B model, 6
Oldroyd viscoelastic model, 10
One-dimensional (1*D*) quadrature-based LB model
  Boltzmann–BGK equation, 275
  full-range Gauss–Hermite quadrature, 275–277
  half-range Gauss–Hermite quadrature, 277–280
  quadrature points, 275

### **P**

Particle image velocimetry (PIV), 177–178
Particle-to-fluid density ratio, 25
Particle tracking velocimetry (PTV)
  4BE method, 193–195
  model-free calibration method
    accuracy, 185–186
    advantages, 185
    air–water interface, 185
    dot centres detection, 182
    image acquisition and spatial detection, 186
    pixel coordinate system, 180
    pixel-line interpolant, 182
    *a priori* model, 180
    protocol, 180–181
    real-world coordinate system, 180–181
    Scheimpflug mounts, 185
    stereo-matching, 179, 183
    Tsai model, 179, 184–186
    2D plane-by-plane transformation, 182
  noise reduction (*see* Noise reduction)
  shadow particle tracking velocimetry, 186
    experimental setup, 187–189
    flow measurements, 191–193
    stereo-matching, 189–191
Passive Brownian motion
  Einstein's fluctuation–dissipation relation, 213
  Euler–Maruyama scheme, 213
  finite-difference approach, 214–215
  Langevin equation, 214
  non-negligible effect, 212
  particle orientation, 213
  translational and rotational erratic motions, 213
Passive particles
  MSD, 217
  passive Brownian motion, 212–215
  spherical particles, 222
  steric interactions, 226–227
PDIPAEMA/P(3-TSPM) JPs, 127
Péclet numbers, 145
Phase-field methods, 22–25
Phase-space distribution, 243–244
Photonic crystals, 82
Pickering emulsions, JPs, 97
  advantage, 123
  catastrophic phase inversion, 126
  coalescence, 123
  formulation, 124, 125
  *vs.* HPs, 124
  Ostwald ripening, 123
  particle affinity, 123–124
  polystyrene/JNP colloidosomes, 126
  stimuli-responsive Pickering emulsions, 127–128
PIV, *see* Particle image velocimetry
PMMA/PS JPs, 101
PMMA/PtBMA biphasic Janus nanoparticles, 102
Point particle method, 25
Poiseuille flow
  boundary conditions, 290–291
  geometry of, 290
  Knudsen paradox, 291–292
  microfluidics specific effect, 292
  moments method approach, 292
  parameter values, 292–293
  reduced distributions, 281–282
  validation of, 292
Poisson equation, 20
Poisson process, 218–219
Polymer co-precipitation, 102–103
Polystyrene/JNP colloidosomes, 126
Probability distribution, 218–219
Pseudoplastic fluid, 2

PS/P(3-TSPM) JPs, 103, 104
PS/PMMA JPs, 101
PS/PPA JPs, 101
PS/PtBA seeds, 101
PtBA/PS JPs, 101
PTV, *see* Particle tracking velocimetry

### **Q**

Qian–Sheng model, 71

### **R**

Rarefied gases
  challenge for, 272
  disadvantage of, 272
  generalities, 273–275
  LB models (*see* Lattice Boltzmann models)
Representative elementary volume (REV), 138–139
Reversibility of fluid flows
  examples, 38–40
  irreversible trajectories, 40–41
Reynolds number, 37
Rheopectic fluids, 4
Rodrigues formula, 224
Run and tumble model, 218–219, 240–241

### **S**

Scalar dissipation, 153–154
Scallop theorem, 39
Seeded emulsion polymerisation, 100–101
Shannon entropy, 247–248
Sharp-interface limit, 24
Shear-thickening fluid, 2, 5
Shear-thinning fluid, 2, 4–5
Simple line interface calculation (SLIC) method, 19
Stimuli-responsive Pickering emulsions, 127–128
Stokes approximation, 37
  accelerating fluid, 47–48
  Brinkman equations, 46
  inertial flow, 47
  linear equations, 46
  Oseen equations, 46
  Stokes paradox, 45–46
Stokes diffusion law, 108
Stokes drag coefficient, 25
Stokes flows
  minimum energy dissipation theorem
    divergence-free vector field, 42
    extensive energy dissipation rate, 42–43
    inclusion monotonicity principle, 43–45
    intensive energy dissipation rate, 42
  Navier–Stokes equations, 36–37
  non-dimensional steady Stokes equations, 37
  no-slip boundary condition, 37–38
  reversibility of fluid flows
    examples, 38–40
    irreversible trajectories, 40–41
  solid harmonics, 38
  spherical harmonics, 38
  Stokes approximation
    accelerating fluid, 47–48
    Brinkman equations, 46
    inertial flow, 47
    linear equations, 46
    Oseen equations, 46
    Stokes paradox, 45–46
  unbounded flow problems, 38
Swim pressure, 253

### **T**

Tangent of hyperbola for interface capturing (THINC) method, 19
Taylor-based Reynolds number, 187–188
Third-order Runge–Kutta TVD integrator, 294
Thixotropic fluids, 4
Three-dimensional (3*D*) momentum space
  equilibrium distribution, 284–285
  macroscopic moments, 285
  momentum vector, 283
  non-dimensionalisation procedure, 285–286
  reduced distributions, 281–283
Time-dependent Hamilton–Jacobi equation, 21
Tracking error, 195
Tsai model, 179, 184–186
Tubular pinch effect, 47
Turbulence models, 138
  *See also* Particle tracking velocimetry (PTV)

### **U**

Umbilic defects, 63–64
Unified coloured noise approximation (UCNA)
  active particles (*see* Active particles)
  BGY hierarchy, 250–252
  entropy production, 247–249
  Fox method, 247
  functional calculus, 246–247
  GCN
    balance condition, 266–267
    entropy production and heat flux, 264–266
  H-theorem, 249–250
  kinetic approach, 243–245
  Markovian approximation, 242–243
  space dependent friction matrix, 242
  stationary solution, 245–246
  underdamped Langevin equation, 242
  velocity correlations, 254–255
Unit normal vector, 19
Upscaling flow
  assumptions and limitations
    non-equilibrium and lack of scale separation, 163
    suspensions and interfacial flows, 163
  homogenisation
    physical interpretation and limitations, 168–169
    reaction–diffusion in perforated domain, 165–167
    stochastic homogenisation, 163
    two-scale expansions, 164–165
  macroscopic transport models (*see* Macroscopic transport models)
  multiphase and surface processes
    dynamic conditions, 159
    mass and heat transfer, 160–162
    mixed conditions, 159
    simple conditions, 159
  permeability, 138
  random field, 138
  through heterogeneous media
    covariance matrix, 142
    multi-Gaussian random field, 142
    permeability, 143
    perturbation theory, 144
    REV, 138–139
    spatial stochastic process, 142
    steady state Darcy flow equation, 142
  through porous media
    Darcy's law, 139–140
    extensions of Darcy's law, 140–142
    REV, 138–139
  volume/ensemble averaging, 169–170
Uzawa algorithm, 9

### **V**

Van der Waals interaction, 117
Vicsek model, 227–228
Viscoelastic fluids, 4
  conformation tensor, 7–8
  finite elastic non-linear elastic model, 6
  global AD, 7
  Kelvin–Voigt model, 6
  Oldroyd-B model, 6
  polymeric stress tensor, 7
  strain rate dependence on time, 5
  Weissenberg number, 7
  WENO schemes, 7
Viscoplasticity
  augmented Lagrangian algorithm, 9
  Bingham number, 10
  elastoviscoplastic fluids, 6, 10
  general Saramito model, 10
  Oldroyd viscoelastic model, 10
  regularisation approach, 9
  Uzawa algorithm, 9
  yield stress, 8–9
Volume/ensemble averaging, 169–170

Volume of fluid (VOF) method, 18–20
Volume of fluid tensorial penalty method, 26
von Kármán flow, 187–188

### **W**

Weighted essentially non-oscillatory (WENO) scheme, 7, 21
Weighted linear interface capturing (WLIC) method, 19
Weissenberg number, 7
Wettability
  bulk-to-surface diffusion, 108
  between polymers, 101

### **Y**

Yield stress fluid, 3